00:00:00.001 Started by upstream project "autotest-per-patch" build number 132310 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.027 The recommended git tool is: git 00:00:00.027 using credential 00000000-0000-0000-0000-000000000002 00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.045 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.064 Using shallow fetch with depth 1 00:00:00.064 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.064 > git --version # timeout=10 00:00:00.086 > git --version # 'git version 2.39.2' 00:00:00.086 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.120 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.120 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.678 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.689 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.701 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.701 > git config core.sparsecheckout # timeout=10 00:00:02.712 > git read-tree -mu HEAD # timeout=10 00:00:02.725 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.742 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.742 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.840 [Pipeline] Start of Pipeline 00:00:02.856 [Pipeline] library 00:00:02.858 Loading library shm_lib@master 00:00:02.858 Library shm_lib@master is cached. Copying from home. 00:00:02.876 [Pipeline] node 00:00:02.887 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:02.888 [Pipeline] { 00:00:02.901 [Pipeline] catchError 00:00:02.903 [Pipeline] { 00:00:02.919 [Pipeline] wrap 00:00:02.929 [Pipeline] { 00:00:02.940 [Pipeline] stage 00:00:02.942 [Pipeline] { (Prologue) 00:00:02.967 [Pipeline] echo 00:00:02.969 Node: VM-host-WFP7 00:00:02.977 [Pipeline] cleanWs 00:00:02.989 [WS-CLEANUP] Deleting project workspace... 00:00:02.989 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.995 [WS-CLEANUP] done 00:00:03.206 [Pipeline] setCustomBuildProperty 00:00:03.338 [Pipeline] httpRequest 00:00:03.651 [Pipeline] echo 00:00:03.652 Sorcerer 10.211.164.20 is alive 00:00:03.660 [Pipeline] retry 00:00:03.661 [Pipeline] { 00:00:03.674 [Pipeline] httpRequest 00:00:03.679 HttpMethod: GET 00:00:03.679 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.680 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.681 Response Code: HTTP/1.1 200 OK 00:00:03.681 Success: Status code 200 is in the accepted range: 200,404 00:00:03.681 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.827 [Pipeline] } 00:00:03.839 [Pipeline] // retry 00:00:03.846 [Pipeline] sh 00:00:04.123 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.138 [Pipeline] httpRequest 00:00:04.442 [Pipeline] echo 00:00:04.444 Sorcerer 10.211.164.20 is alive 00:00:04.452 [Pipeline] retry 00:00:04.454 [Pipeline] { 00:00:04.469 [Pipeline] httpRequest 00:00:04.474 HttpMethod: GET 00:00:04.475 URL: 
http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:04.475 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:04.476 Response Code: HTTP/1.1 200 OK 00:00:04.476 Success: Status code 200 is in the accepted range: 200,404 00:00:04.477 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:58.736 [Pipeline] } 00:00:58.754 [Pipeline] // retry 00:00:58.762 [Pipeline] sh 00:00:59.045 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:01:01.593 [Pipeline] sh 00:01:01.878 + git -C spdk log --oneline -n5 00:01:01.878 d47eb51c9 bdev: fix a race between reset start and complete 00:01:01.878 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:01.878 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:01.878 4bcab9fb9 correct kick for CQ full case 00:01:01.878 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:01.898 [Pipeline] writeFile 00:01:01.915 [Pipeline] sh 00:01:02.243 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:02.257 [Pipeline] sh 00:01:02.540 + cat autorun-spdk.conf 00:01:02.540 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.540 SPDK_RUN_ASAN=1 00:01:02.540 SPDK_RUN_UBSAN=1 00:01:02.540 SPDK_TEST_RAID=1 00:01:02.540 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.547 RUN_NIGHTLY=0 00:01:02.549 [Pipeline] } 00:01:02.562 [Pipeline] // stage 00:01:02.578 [Pipeline] stage 00:01:02.580 [Pipeline] { (Run VM) 00:01:02.593 [Pipeline] sh 00:01:02.876 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:02.876 + echo 'Start stage prepare_nvme.sh' 00:01:02.876 Start stage prepare_nvme.sh 00:01:02.876 + [[ -n 7 ]] 00:01:02.876 + disk_prefix=ex7 00:01:02.876 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:01:02.876 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:01:02.876 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:01:02.876 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.876 ++ SPDK_RUN_ASAN=1 00:01:02.877 ++ SPDK_RUN_UBSAN=1 00:01:02.877 ++ SPDK_TEST_RAID=1 00:01:02.877 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.877 ++ RUN_NIGHTLY=0 00:01:02.877 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:01:02.877 + nvme_files=() 00:01:02.877 + declare -A nvme_files 00:01:02.877 + backend_dir=/var/lib/libvirt/images/backends 00:01:02.877 + nvme_files['nvme.img']=5G 00:01:02.877 + nvme_files['nvme-cmb.img']=5G 00:01:02.877 + nvme_files['nvme-multi0.img']=4G 00:01:02.877 + nvme_files['nvme-multi1.img']=4G 00:01:02.877 + nvme_files['nvme-multi2.img']=4G 00:01:02.877 + nvme_files['nvme-openstack.img']=8G 00:01:02.877 + nvme_files['nvme-zns.img']=5G 00:01:02.877 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:02.877 + (( SPDK_TEST_FTL == 1 )) 00:01:02.877 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:02.877 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:02.877 + for nvme in "${!nvme_files[@]}" 00:01:02.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:02.877 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.877 + for nvme in "${!nvme_files[@]}" 00:01:02.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:02.877 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:02.877 + for nvme in "${!nvme_files[@]}" 00:01:02.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:02.877 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:02.877 + for nvme in "${!nvme_files[@]}" 00:01:02.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:02.877 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:02.877 + for nvme in "${!nvme_files[@]}" 00:01:02.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:02.877 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.877 + for nvme in "${!nvme_files[@]}" 00:01:02.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:02.877 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.877 + for nvme in "${!nvme_files[@]}" 00:01:02.877 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:03.136 
Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.136 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:03.136 + echo 'End stage prepare_nvme.sh' 00:01:03.136 End stage prepare_nvme.sh 00:01:03.149 [Pipeline] sh 00:01:03.435 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:03.435 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:01:03.435 00:01:03.435 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:01:03.435 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:01:03.435 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:01:03.435 HELP=0 00:01:03.435 DRY_RUN=0 00:01:03.435 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:03.435 NVME_DISKS_TYPE=nvme,nvme, 00:01:03.435 NVME_AUTO_CREATE=0 00:01:03.435 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:03.435 NVME_CMB=,, 00:01:03.435 NVME_PMR=,, 00:01:03.435 NVME_ZNS=,, 00:01:03.435 NVME_MS=,, 00:01:03.435 NVME_FDP=,, 00:01:03.435 SPDK_VAGRANT_DISTRO=fedora39 00:01:03.435 SPDK_VAGRANT_VMCPU=10 00:01:03.435 SPDK_VAGRANT_VMRAM=12288 00:01:03.435 SPDK_VAGRANT_PROVIDER=libvirt 00:01:03.435 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:03.435 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:03.435 SPDK_OPENSTACK_NETWORK=0 00:01:03.435 VAGRANT_PACKAGE_BOX=0 00:01:03.435 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 
00:01:03.435 FORCE_DISTRO=true 00:01:03.435 VAGRANT_BOX_VERSION= 00:01:03.435 EXTRA_VAGRANTFILES= 00:01:03.435 NIC_MODEL=virtio 00:01:03.435 00:01:03.435 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:01:03.435 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:01:05.969 Bringing machine 'default' up with 'libvirt' provider... 00:01:06.538 ==> default: Creating image (snapshot of base box volume). 00:01:06.538 ==> default: Creating domain with the following settings... 00:01:06.538 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731935856_40a45c25682a797372af 00:01:06.538 ==> default: -- Domain type: kvm 00:01:06.538 ==> default: -- Cpus: 10 00:01:06.538 ==> default: -- Feature: acpi 00:01:06.538 ==> default: -- Feature: apic 00:01:06.538 ==> default: -- Feature: pae 00:01:06.538 ==> default: -- Memory: 12288M 00:01:06.538 ==> default: -- Memory Backing: hugepages: 00:01:06.538 ==> default: -- Management MAC: 00:01:06.538 ==> default: -- Loader: 00:01:06.538 ==> default: -- Nvram: 00:01:06.538 ==> default: -- Base box: spdk/fedora39 00:01:06.538 ==> default: -- Storage pool: default 00:01:06.538 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731935856_40a45c25682a797372af.img (20G) 00:01:06.538 ==> default: -- Volume Cache: default 00:01:06.538 ==> default: -- Kernel: 00:01:06.538 ==> default: -- Initrd: 00:01:06.538 ==> default: -- Graphics Type: vnc 00:01:06.538 ==> default: -- Graphics Port: -1 00:01:06.538 ==> default: -- Graphics IP: 127.0.0.1 00:01:06.538 ==> default: -- Graphics Password: Not defined 00:01:06.538 ==> default: -- Video Type: cirrus 00:01:06.538 ==> default: -- Video VRAM: 9216 00:01:06.538 ==> default: -- Sound Type: 00:01:06.538 ==> default: -- Keymap: en-us 00:01:06.538 ==> default: -- TPM Path: 00:01:06.538 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:06.538 ==> default: -- Command line 
args: 00:01:06.538 ==> default: -> value=-device, 00:01:06.538 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:06.538 ==> default: -> value=-drive, 00:01:06.538 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:06.538 ==> default: -> value=-device, 00:01:06.538 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:06.538 ==> default: -> value=-device, 00:01:06.538 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:06.538 ==> default: -> value=-drive, 00:01:06.538 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:06.538 ==> default: -> value=-device, 00:01:06.538 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:06.538 ==> default: -> value=-drive, 00:01:06.538 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:06.538 ==> default: -> value=-device, 00:01:06.538 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:06.538 ==> default: -> value=-drive, 00:01:06.538 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:06.538 ==> default: -> value=-device, 00:01:06.538 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:06.798 ==> default: Creating shared folders metadata... 00:01:06.798 ==> default: Starting domain. 00:01:08.705 ==> default: Waiting for domain to get an IP address... 00:01:23.590 ==> default: Waiting for SSH to become available... 00:01:24.965 ==> default: Configuring and enabling network interfaces... 
00:01:31.579 default: SSH address: 192.168.121.185:22 00:01:31.579 default: SSH username: vagrant 00:01:31.579 default: SSH auth method: private key 00:01:33.485 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:43.471 ==> default: Mounting SSHFS shared folder... 00:01:44.409 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:44.409 ==> default: Checking Mount.. 00:01:46.313 ==> default: Folder Successfully Mounted! 00:01:46.313 ==> default: Running provisioner: file... 00:01:47.250 default: ~/.gitconfig => .gitconfig 00:01:47.508 00:01:47.508 SUCCESS! 00:01:47.508 00:01:47.508 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:47.508 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:47.508 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:01:47.508 00:01:47.517 [Pipeline] } 00:01:47.534 [Pipeline] // stage 00:01:47.543 [Pipeline] dir 00:01:47.544 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:01:47.545 [Pipeline] { 00:01:47.560 [Pipeline] catchError 00:01:47.562 [Pipeline] { 00:01:47.573 [Pipeline] sh 00:01:47.850 + vagrant ssh-config --host vagrant 00:01:47.850 + sed -ne /^Host/,$p 00:01:47.850 + tee ssh_conf 00:01:50.446 Host vagrant 00:01:50.446 HostName 192.168.121.185 00:01:50.446 User vagrant 00:01:50.446 Port 22 00:01:50.446 UserKnownHostsFile /dev/null 00:01:50.446 StrictHostKeyChecking no 00:01:50.446 PasswordAuthentication no 00:01:50.447 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:50.447 IdentitiesOnly yes 00:01:50.447 LogLevel FATAL 00:01:50.447 ForwardAgent yes 00:01:50.447 ForwardX11 yes 00:01:50.447 00:01:50.460 [Pipeline] withEnv 00:01:50.462 [Pipeline] { 00:01:50.476 [Pipeline] sh 00:01:50.755 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:50.755 source /etc/os-release 00:01:50.755 [[ -e /image.version ]] && img=$(< /image.version) 00:01:50.755 # Minimal, systemd-like check. 00:01:50.755 if [[ -e /.dockerenv ]]; then 00:01:50.755 # Clear garbage from the node's name: 00:01:50.755 # agt-er_autotest_547-896 -> autotest_547-896 00:01:50.755 # $HOSTNAME is the actual container id 00:01:50.755 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:50.755 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:50.755 # We can assume this is a mount from a host where container is running, 00:01:50.755 # so fetch its hostname to easily identify the target swarm worker. 
00:01:50.755 container="$(< /etc/hostname) ($agent)" 00:01:50.755 else 00:01:50.755 # Fallback 00:01:50.755 container=$agent 00:01:50.755 fi 00:01:50.755 fi 00:01:50.755 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:50.755 00:01:51.025 [Pipeline] } 00:01:51.041 [Pipeline] // withEnv 00:01:51.050 [Pipeline] setCustomBuildProperty 00:01:51.066 [Pipeline] stage 00:01:51.068 [Pipeline] { (Tests) 00:01:51.086 [Pipeline] sh 00:01:51.364 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:51.637 [Pipeline] sh 00:01:51.917 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:52.191 [Pipeline] timeout 00:01:52.191 Timeout set to expire in 1 hr 30 min 00:01:52.193 [Pipeline] { 00:01:52.208 [Pipeline] sh 00:01:52.487 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:53.056 HEAD is now at d47eb51c9 bdev: fix a race between reset start and complete 00:01:53.068 [Pipeline] sh 00:01:53.417 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:53.692 [Pipeline] sh 00:01:53.975 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:54.253 [Pipeline] sh 00:01:54.537 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:54.797 ++ readlink -f spdk_repo 00:01:54.797 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:54.797 + [[ -n /home/vagrant/spdk_repo ]] 00:01:54.797 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:54.797 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:54.797 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:54.797 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:54.797 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:54.797 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:54.797 + cd /home/vagrant/spdk_repo 00:01:54.797 + source /etc/os-release 00:01:54.797 ++ NAME='Fedora Linux' 00:01:54.797 ++ VERSION='39 (Cloud Edition)' 00:01:54.797 ++ ID=fedora 00:01:54.797 ++ VERSION_ID=39 00:01:54.797 ++ VERSION_CODENAME= 00:01:54.797 ++ PLATFORM_ID=platform:f39 00:01:54.797 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:54.797 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:54.797 ++ LOGO=fedora-logo-icon 00:01:54.797 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:54.797 ++ HOME_URL=https://fedoraproject.org/ 00:01:54.797 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:54.797 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:54.797 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:54.797 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:54.797 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:54.797 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:54.797 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:54.797 ++ SUPPORT_END=2024-11-12 00:01:54.797 ++ VARIANT='Cloud Edition' 00:01:54.797 ++ VARIANT_ID=cloud 00:01:54.797 + uname -a 00:01:54.797 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:54.797 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:55.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:55.367 Hugepages 00:01:55.367 node hugesize free / total 00:01:55.367 node0 1048576kB 0 / 0 00:01:55.367 node0 2048kB 0 / 0 00:01:55.367 00:01:55.367 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:55.367 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:55.367 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:55.367 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:55.367 + rm -f /tmp/spdk-ld-path 00:01:55.367 + source autorun-spdk.conf 00:01:55.367 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.367 ++ SPDK_RUN_ASAN=1 00:01:55.367 ++ SPDK_RUN_UBSAN=1 00:01:55.367 ++ SPDK_TEST_RAID=1 00:01:55.367 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:55.367 ++ RUN_NIGHTLY=0 00:01:55.367 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:55.367 + [[ -n '' ]] 00:01:55.367 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:55.367 + for M in /var/spdk/build-*-manifest.txt 00:01:55.367 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:55.367 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:55.627 + for M in /var/spdk/build-*-manifest.txt 00:01:55.627 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:55.627 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:55.627 + for M in /var/spdk/build-*-manifest.txt 00:01:55.627 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:55.627 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:55.627 ++ uname 00:01:55.627 + [[ Linux == \L\i\n\u\x ]] 00:01:55.627 + sudo dmesg -T 00:01:55.627 + sudo dmesg --clear 00:01:55.627 + dmesg_pid=5418 00:01:55.627 + [[ Fedora Linux == FreeBSD ]] 00:01:55.627 + sudo dmesg -Tw 00:01:55.627 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.627 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.627 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:55.627 + [[ -x /usr/src/fio-static/fio ]] 00:01:55.627 + export FIO_BIN=/usr/src/fio-static/fio 00:01:55.627 + FIO_BIN=/usr/src/fio-static/fio 00:01:55.627 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:55.627 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:55.627 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:55.627 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.627 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.627 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:55.627 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.627 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.627 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:55.627 13:18:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:55.627 13:18:25 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:55.627 13:18:25 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.627 13:18:25 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:55.627 13:18:25 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:55.627 13:18:25 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:55.627 13:18:25 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:55.627 13:18:25 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:55.627 13:18:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:55.627 13:18:25 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:55.888 13:18:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:55.888 13:18:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:55.888 13:18:25 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:55.888 13:18:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:55.888 13:18:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:55.888 13:18:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:55.888 13:18:25 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.888 13:18:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.888 13:18:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.888 13:18:25 -- paths/export.sh@5 -- $ export PATH 00:01:55.888 13:18:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.888 13:18:25 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:55.888 13:18:25 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:55.888 13:18:25 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731935905.XXXXXX 00:01:55.888 13:18:25 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731935905.VAwWTv 00:01:55.888 13:18:25 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:55.888 13:18:25 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:55.888 13:18:25 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:55.888 13:18:25 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:55.888 13:18:25 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:55.888 13:18:25 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:55.888 13:18:25 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:55.888 13:18:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.888 13:18:25 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:55.888 13:18:25 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:55.888 13:18:25 -- pm/common@17 -- $ local monitor 00:01:55.888 13:18:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.888 13:18:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.888 13:18:25 -- pm/common@25 -- $ sleep 1 00:01:55.888 13:18:25 -- pm/common@21 -- $ date +%s 00:01:55.888 13:18:25 -- pm/common@21 -- $ date +%s 00:01:55.888 
13:18:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731935905 00:01:55.888 13:18:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731935905 00:01:55.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731935905_collect-cpu-load.pm.log 00:01:55.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731935905_collect-vmstat.pm.log 00:01:56.830 13:18:26 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:56.830 13:18:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:56.830 13:18:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:56.830 13:18:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:56.830 13:18:26 -- spdk/autobuild.sh@16 -- $ date -u 00:01:56.830 Mon Nov 18 01:18:26 PM UTC 2024 00:01:56.830 13:18:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:56.830 v25.01-pre-190-gd47eb51c9 00:01:56.830 13:18:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:56.830 13:18:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:56.830 13:18:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:56.830 13:18:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:56.830 13:18:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.830 ************************************ 00:01:56.830 START TEST asan 00:01:56.830 ************************************ 00:01:56.830 using asan 00:01:56.830 13:18:26 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:56.830 00:01:56.830 real 0m0.000s 00:01:56.830 user 0m0.000s 00:01:56.830 sys 0m0.000s 00:01:56.830 13:18:26 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:56.830 13:18:26 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:56.830 ************************************ 00:01:56.830 END TEST asan 00:01:56.830 ************************************ 00:01:56.830 13:18:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:56.830 13:18:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:56.830 13:18:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:56.830 13:18:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:56.830 13:18:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.830 ************************************ 00:01:56.830 START TEST ubsan 00:01:56.830 ************************************ 00:01:56.830 using ubsan 00:01:56.830 13:18:26 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:56.830 00:01:56.830 real 0m0.000s 00:01:56.830 user 0m0.000s 00:01:56.830 sys 0m0.000s 00:01:56.830 13:18:26 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:56.830 13:18:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:56.830 ************************************ 00:01:56.830 END TEST ubsan 00:01:56.830 ************************************ 00:01:57.090 13:18:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:57.090 13:18:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:57.090 13:18:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:57.090 13:18:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:57.090 13:18:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:57.090 13:18:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:57.090 13:18:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:57.090 13:18:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:57.090 13:18:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:57.090 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:57.090 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:57.658 Using 'verbs' RDMA provider 00:02:13.482 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:31.569 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:31.569 Creating mk/config.mk...done. 00:02:31.569 Creating mk/cc.flags.mk...done. 00:02:31.569 Type 'make' to build. 00:02:31.569 13:18:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:31.569 13:18:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:31.569 13:18:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:31.569 13:18:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.569 ************************************ 00:02:31.569 START TEST make 00:02:31.569 ************************************ 00:02:31.569 13:18:59 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:31.569 make[1]: Nothing to be done for 'all'. 
00:02:41.653 The Meson build system 00:02:41.653 Version: 1.5.0 00:02:41.653 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:41.653 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:41.653 Build type: native build 00:02:41.653 Program cat found: YES (/usr/bin/cat) 00:02:41.653 Project name: DPDK 00:02:41.653 Project version: 24.03.0 00:02:41.653 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:41.653 C linker for the host machine: cc ld.bfd 2.40-14 00:02:41.653 Host machine cpu family: x86_64 00:02:41.653 Host machine cpu: x86_64 00:02:41.653 Message: ## Building in Developer Mode ## 00:02:41.653 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:41.653 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:41.653 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:41.653 Program python3 found: YES (/usr/bin/python3) 00:02:41.653 Program cat found: YES (/usr/bin/cat) 00:02:41.653 Compiler for C supports arguments -march=native: YES 00:02:41.653 Checking for size of "void *" : 8 00:02:41.653 Checking for size of "void *" : 8 (cached) 00:02:41.653 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:41.653 Library m found: YES 00:02:41.653 Library numa found: YES 00:02:41.653 Has header "numaif.h" : YES 00:02:41.653 Library fdt found: NO 00:02:41.653 Library execinfo found: NO 00:02:41.653 Has header "execinfo.h" : YES 00:02:41.653 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:41.653 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:41.653 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:41.653 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:41.653 Run-time dependency openssl found: YES 3.1.1 00:02:41.653 Run-time dependency libpcap found: YES 1.10.4 00:02:41.653 Has header "pcap.h" with dependency 
libpcap: YES 00:02:41.653 Compiler for C supports arguments -Wcast-qual: YES 00:02:41.653 Compiler for C supports arguments -Wdeprecated: YES 00:02:41.653 Compiler for C supports arguments -Wformat: YES 00:02:41.653 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:41.653 Compiler for C supports arguments -Wformat-security: NO 00:02:41.653 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.653 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.653 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.653 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.653 Compiler for C supports arguments -Wpointer-arith: YES 00:02:41.653 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.653 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.653 Compiler for C supports arguments -Wundef: YES 00:02:41.653 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.653 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:41.653 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.653 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.653 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:41.654 Program objdump found: YES (/usr/bin/objdump) 00:02:41.654 Compiler for C supports arguments -mavx512f: YES 00:02:41.654 Checking if "AVX512 checking" compiles: YES 00:02:41.654 Fetching value of define "__SSE4_2__" : 1 00:02:41.654 Fetching value of define "__AES__" : 1 00:02:41.654 Fetching value of define "__AVX__" : 1 00:02:41.654 Fetching value of define "__AVX2__" : 1 00:02:41.654 Fetching value of define "__AVX512BW__" : 1 00:02:41.654 Fetching value of define "__AVX512CD__" : 1 00:02:41.654 Fetching value of define "__AVX512DQ__" : 1 00:02:41.654 Fetching value of define "__AVX512F__" : 1 00:02:41.654 Fetching value of define "__AVX512VL__" : 1 00:02:41.654 Fetching value of define 
"__PCLMUL__" : 1 00:02:41.654 Fetching value of define "__RDRND__" : 1 00:02:41.654 Fetching value of define "__RDSEED__" : 1 00:02:41.654 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:41.654 Fetching value of define "__znver1__" : (undefined) 00:02:41.654 Fetching value of define "__znver2__" : (undefined) 00:02:41.654 Fetching value of define "__znver3__" : (undefined) 00:02:41.654 Fetching value of define "__znver4__" : (undefined) 00:02:41.654 Library asan found: YES 00:02:41.654 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.654 Message: lib/log: Defining dependency "log" 00:02:41.654 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.654 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.654 Library rt found: YES 00:02:41.654 Checking for function "getentropy" : NO 00:02:41.654 Message: lib/eal: Defining dependency "eal" 00:02:41.654 Message: lib/ring: Defining dependency "ring" 00:02:41.654 Message: lib/rcu: Defining dependency "rcu" 00:02:41.654 Message: lib/mempool: Defining dependency "mempool" 00:02:41.654 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.654 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:41.654 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:41.654 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:41.654 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:41.654 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:41.654 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:41.654 Compiler for C supports arguments -mpclmul: YES 00:02:41.654 Compiler for C supports arguments -maes: YES 00:02:41.654 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.654 Compiler for C supports arguments -mavx512bw: YES 00:02:41.654 Compiler for C supports arguments -mavx512dq: YES 00:02:41.654 Compiler for C supports arguments -mavx512vl: YES 00:02:41.654 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:41.654 Compiler for C supports arguments -mavx2: YES 00:02:41.654 Compiler for C supports arguments -mavx: YES 00:02:41.654 Message: lib/net: Defining dependency "net" 00:02:41.654 Message: lib/meter: Defining dependency "meter" 00:02:41.654 Message: lib/ethdev: Defining dependency "ethdev" 00:02:41.654 Message: lib/pci: Defining dependency "pci" 00:02:41.654 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.654 Message: lib/hash: Defining dependency "hash" 00:02:41.654 Message: lib/timer: Defining dependency "timer" 00:02:41.654 Message: lib/compressdev: Defining dependency "compressdev" 00:02:41.654 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.654 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.654 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:41.654 Message: lib/power: Defining dependency "power" 00:02:41.654 Message: lib/reorder: Defining dependency "reorder" 00:02:41.654 Message: lib/security: Defining dependency "security" 00:02:41.654 Has header "linux/userfaultfd.h" : YES 00:02:41.654 Has header "linux/vduse.h" : YES 00:02:41.654 Message: lib/vhost: Defining dependency "vhost" 00:02:41.654 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:41.654 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.654 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:41.654 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.654 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:41.654 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:41.654 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:41.654 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:41.654 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:41.654 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:41.654 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:41.654 Configuring doxy-api-html.conf using configuration 00:02:41.654 Configuring doxy-api-man.conf using configuration 00:02:41.654 Program mandb found: YES (/usr/bin/mandb) 00:02:41.654 Program sphinx-build found: NO 00:02:41.654 Configuring rte_build_config.h using configuration 00:02:41.654 Message: 00:02:41.654 ================= 00:02:41.654 Applications Enabled 00:02:41.654 ================= 00:02:41.654 00:02:41.654 apps: 00:02:41.654 00:02:41.654 00:02:41.654 Message: 00:02:41.654 ================= 00:02:41.654 Libraries Enabled 00:02:41.654 ================= 00:02:41.654 00:02:41.654 libs: 00:02:41.654 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:41.654 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:41.654 cryptodev, dmadev, power, reorder, security, vhost, 00:02:41.654 00:02:41.654 Message: 00:02:41.654 =============== 00:02:41.654 Drivers Enabled 00:02:41.654 =============== 00:02:41.654 00:02:41.654 common: 00:02:41.654 00:02:41.654 bus: 00:02:41.654 pci, vdev, 00:02:41.654 mempool: 00:02:41.654 ring, 00:02:41.654 dma: 00:02:41.654 00:02:41.654 net: 00:02:41.654 00:02:41.654 crypto: 00:02:41.654 00:02:41.654 compress: 00:02:41.654 00:02:41.654 vdpa: 00:02:41.654 00:02:41.654 00:02:41.654 Message: 00:02:41.654 ================= 00:02:41.654 Content Skipped 00:02:41.654 ================= 00:02:41.654 00:02:41.654 apps: 00:02:41.654 dumpcap: explicitly disabled via build config 00:02:41.654 graph: explicitly disabled via build config 00:02:41.654 pdump: explicitly disabled via build config 00:02:41.654 proc-info: explicitly disabled via build config 00:02:41.654 test-acl: explicitly disabled via build config 00:02:41.654 test-bbdev: explicitly disabled via build config 00:02:41.654 test-cmdline: explicitly disabled via build config 00:02:41.654 test-compress-perf: explicitly disabled via build config 00:02:41.654 test-crypto-perf: explicitly disabled via build 
config 00:02:41.654 test-dma-perf: explicitly disabled via build config 00:02:41.654 test-eventdev: explicitly disabled via build config 00:02:41.654 test-fib: explicitly disabled via build config 00:02:41.654 test-flow-perf: explicitly disabled via build config 00:02:41.654 test-gpudev: explicitly disabled via build config 00:02:41.654 test-mldev: explicitly disabled via build config 00:02:41.654 test-pipeline: explicitly disabled via build config 00:02:41.654 test-pmd: explicitly disabled via build config 00:02:41.654 test-regex: explicitly disabled via build config 00:02:41.654 test-sad: explicitly disabled via build config 00:02:41.654 test-security-perf: explicitly disabled via build config 00:02:41.654 00:02:41.654 libs: 00:02:41.654 argparse: explicitly disabled via build config 00:02:41.654 metrics: explicitly disabled via build config 00:02:41.654 acl: explicitly disabled via build config 00:02:41.654 bbdev: explicitly disabled via build config 00:02:41.654 bitratestats: explicitly disabled via build config 00:02:41.654 bpf: explicitly disabled via build config 00:02:41.654 cfgfile: explicitly disabled via build config 00:02:41.654 distributor: explicitly disabled via build config 00:02:41.654 efd: explicitly disabled via build config 00:02:41.654 eventdev: explicitly disabled via build config 00:02:41.654 dispatcher: explicitly disabled via build config 00:02:41.654 gpudev: explicitly disabled via build config 00:02:41.654 gro: explicitly disabled via build config 00:02:41.654 gso: explicitly disabled via build config 00:02:41.654 ip_frag: explicitly disabled via build config 00:02:41.654 jobstats: explicitly disabled via build config 00:02:41.654 latencystats: explicitly disabled via build config 00:02:41.654 lpm: explicitly disabled via build config 00:02:41.654 member: explicitly disabled via build config 00:02:41.654 pcapng: explicitly disabled via build config 00:02:41.654 rawdev: explicitly disabled via build config 00:02:41.654 regexdev: explicitly 
disabled via build config 00:02:41.654 mldev: explicitly disabled via build config 00:02:41.654 rib: explicitly disabled via build config 00:02:41.654 sched: explicitly disabled via build config 00:02:41.654 stack: explicitly disabled via build config 00:02:41.654 ipsec: explicitly disabled via build config 00:02:41.654 pdcp: explicitly disabled via build config 00:02:41.654 fib: explicitly disabled via build config 00:02:41.654 port: explicitly disabled via build config 00:02:41.654 pdump: explicitly disabled via build config 00:02:41.654 table: explicitly disabled via build config 00:02:41.654 pipeline: explicitly disabled via build config 00:02:41.654 graph: explicitly disabled via build config 00:02:41.654 node: explicitly disabled via build config 00:02:41.654 00:02:41.654 drivers: 00:02:41.654 common/cpt: not in enabled drivers build config 00:02:41.654 common/dpaax: not in enabled drivers build config 00:02:41.654 common/iavf: not in enabled drivers build config 00:02:41.654 common/idpf: not in enabled drivers build config 00:02:41.654 common/ionic: not in enabled drivers build config 00:02:41.654 common/mvep: not in enabled drivers build config 00:02:41.654 common/octeontx: not in enabled drivers build config 00:02:41.654 bus/auxiliary: not in enabled drivers build config 00:02:41.654 bus/cdx: not in enabled drivers build config 00:02:41.654 bus/dpaa: not in enabled drivers build config 00:02:41.654 bus/fslmc: not in enabled drivers build config 00:02:41.654 bus/ifpga: not in enabled drivers build config 00:02:41.655 bus/platform: not in enabled drivers build config 00:02:41.655 bus/uacce: not in enabled drivers build config 00:02:41.655 bus/vmbus: not in enabled drivers build config 00:02:41.655 common/cnxk: not in enabled drivers build config 00:02:41.655 common/mlx5: not in enabled drivers build config 00:02:41.655 common/nfp: not in enabled drivers build config 00:02:41.655 common/nitrox: not in enabled drivers build config 00:02:41.655 common/qat: not 
in enabled drivers build config 00:02:41.655 common/sfc_efx: not in enabled drivers build config 00:02:41.655 mempool/bucket: not in enabled drivers build config 00:02:41.655 mempool/cnxk: not in enabled drivers build config 00:02:41.655 mempool/dpaa: not in enabled drivers build config 00:02:41.655 mempool/dpaa2: not in enabled drivers build config 00:02:41.655 mempool/octeontx: not in enabled drivers build config 00:02:41.655 mempool/stack: not in enabled drivers build config 00:02:41.655 dma/cnxk: not in enabled drivers build config 00:02:41.655 dma/dpaa: not in enabled drivers build config 00:02:41.655 dma/dpaa2: not in enabled drivers build config 00:02:41.655 dma/hisilicon: not in enabled drivers build config 00:02:41.655 dma/idxd: not in enabled drivers build config 00:02:41.655 dma/ioat: not in enabled drivers build config 00:02:41.655 dma/skeleton: not in enabled drivers build config 00:02:41.655 net/af_packet: not in enabled drivers build config 00:02:41.655 net/af_xdp: not in enabled drivers build config 00:02:41.655 net/ark: not in enabled drivers build config 00:02:41.655 net/atlantic: not in enabled drivers build config 00:02:41.655 net/avp: not in enabled drivers build config 00:02:41.655 net/axgbe: not in enabled drivers build config 00:02:41.655 net/bnx2x: not in enabled drivers build config 00:02:41.655 net/bnxt: not in enabled drivers build config 00:02:41.655 net/bonding: not in enabled drivers build config 00:02:41.655 net/cnxk: not in enabled drivers build config 00:02:41.655 net/cpfl: not in enabled drivers build config 00:02:41.655 net/cxgbe: not in enabled drivers build config 00:02:41.655 net/dpaa: not in enabled drivers build config 00:02:41.655 net/dpaa2: not in enabled drivers build config 00:02:41.655 net/e1000: not in enabled drivers build config 00:02:41.655 net/ena: not in enabled drivers build config 00:02:41.655 net/enetc: not in enabled drivers build config 00:02:41.655 net/enetfec: not in enabled drivers build config 
00:02:41.655 net/enic: not in enabled drivers build config 00:02:41.655 net/failsafe: not in enabled drivers build config 00:02:41.655 net/fm10k: not in enabled drivers build config 00:02:41.655 net/gve: not in enabled drivers build config 00:02:41.655 net/hinic: not in enabled drivers build config 00:02:41.655 net/hns3: not in enabled drivers build config 00:02:41.655 net/i40e: not in enabled drivers build config 00:02:41.655 net/iavf: not in enabled drivers build config 00:02:41.655 net/ice: not in enabled drivers build config 00:02:41.655 net/idpf: not in enabled drivers build config 00:02:41.655 net/igc: not in enabled drivers build config 00:02:41.655 net/ionic: not in enabled drivers build config 00:02:41.655 net/ipn3ke: not in enabled drivers build config 00:02:41.655 net/ixgbe: not in enabled drivers build config 00:02:41.655 net/mana: not in enabled drivers build config 00:02:41.655 net/memif: not in enabled drivers build config 00:02:41.655 net/mlx4: not in enabled drivers build config 00:02:41.655 net/mlx5: not in enabled drivers build config 00:02:41.655 net/mvneta: not in enabled drivers build config 00:02:41.655 net/mvpp2: not in enabled drivers build config 00:02:41.655 net/netvsc: not in enabled drivers build config 00:02:41.655 net/nfb: not in enabled drivers build config 00:02:41.655 net/nfp: not in enabled drivers build config 00:02:41.655 net/ngbe: not in enabled drivers build config 00:02:41.655 net/null: not in enabled drivers build config 00:02:41.655 net/octeontx: not in enabled drivers build config 00:02:41.655 net/octeon_ep: not in enabled drivers build config 00:02:41.655 net/pcap: not in enabled drivers build config 00:02:41.655 net/pfe: not in enabled drivers build config 00:02:41.655 net/qede: not in enabled drivers build config 00:02:41.655 net/ring: not in enabled drivers build config 00:02:41.655 net/sfc: not in enabled drivers build config 00:02:41.655 net/softnic: not in enabled drivers build config 00:02:41.655 net/tap: not in 
enabled drivers build config 00:02:41.655 net/thunderx: not in enabled drivers build config 00:02:41.655 net/txgbe: not in enabled drivers build config 00:02:41.655 net/vdev_netvsc: not in enabled drivers build config 00:02:41.655 net/vhost: not in enabled drivers build config 00:02:41.655 net/virtio: not in enabled drivers build config 00:02:41.655 net/vmxnet3: not in enabled drivers build config 00:02:41.655 raw/*: missing internal dependency, "rawdev" 00:02:41.655 crypto/armv8: not in enabled drivers build config 00:02:41.655 crypto/bcmfs: not in enabled drivers build config 00:02:41.655 crypto/caam_jr: not in enabled drivers build config 00:02:41.655 crypto/ccp: not in enabled drivers build config 00:02:41.655 crypto/cnxk: not in enabled drivers build config 00:02:41.655 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.655 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.655 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.655 crypto/mlx5: not in enabled drivers build config 00:02:41.655 crypto/mvsam: not in enabled drivers build config 00:02:41.655 crypto/nitrox: not in enabled drivers build config 00:02:41.655 crypto/null: not in enabled drivers build config 00:02:41.655 crypto/octeontx: not in enabled drivers build config 00:02:41.655 crypto/openssl: not in enabled drivers build config 00:02:41.655 crypto/scheduler: not in enabled drivers build config 00:02:41.655 crypto/uadk: not in enabled drivers build config 00:02:41.655 crypto/virtio: not in enabled drivers build config 00:02:41.655 compress/isal: not in enabled drivers build config 00:02:41.655 compress/mlx5: not in enabled drivers build config 00:02:41.655 compress/nitrox: not in enabled drivers build config 00:02:41.655 compress/octeontx: not in enabled drivers build config 00:02:41.655 compress/zlib: not in enabled drivers build config 00:02:41.655 regex/*: missing internal dependency, "regexdev" 00:02:41.655 ml/*: missing internal dependency, "mldev" 
00:02:41.655 vdpa/ifc: not in enabled drivers build config 00:02:41.655 vdpa/mlx5: not in enabled drivers build config 00:02:41.655 vdpa/nfp: not in enabled drivers build config 00:02:41.655 vdpa/sfc: not in enabled drivers build config 00:02:41.655 event/*: missing internal dependency, "eventdev" 00:02:41.655 baseband/*: missing internal dependency, "bbdev" 00:02:41.655 gpu/*: missing internal dependency, "gpudev" 00:02:41.655 00:02:41.655 00:02:41.655 Build targets in project: 85 00:02:41.655 00:02:41.655 DPDK 24.03.0 00:02:41.655 00:02:41.655 User defined options 00:02:41.655 buildtype : debug 00:02:41.655 default_library : shared 00:02:41.655 libdir : lib 00:02:41.655 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:41.655 b_sanitize : address 00:02:41.655 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:41.655 c_link_args : 00:02:41.655 cpu_instruction_set: native 00:02:41.655 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:41.655 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:41.655 enable_docs : false 00:02:41.655 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:41.655 enable_kmods : false 00:02:41.655 max_lcores : 128 00:02:41.655 tests : false 00:02:41.655 00:02:41.655 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.223 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.223 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:42.482 [2/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:42.482 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:42.482 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.482 [5/268] Linking static target lib/librte_kvargs.a 00:02:42.482 [6/268] Linking static target lib/librte_log.a 00:02:42.742 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.742 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.742 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.002 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:43.002 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.002 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.002 [13/268] Linking static target lib/librte_telemetry.a 00:02:43.002 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:43.002 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.002 [16/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.002 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:43.002 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.261 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.520 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.520 [21/268] Linking target lib/librte_log.so.24.1 00:02:43.520 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.520 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.520 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.520 [25/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.779 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.779 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:43.779 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.779 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.779 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:43.779 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:43.779 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.779 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:44.051 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.051 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.051 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:44.051 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:44.051 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.326 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.326 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.326 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.326 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.326 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.326 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.584 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.584 [46/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.584 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.842 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.843 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.843 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.843 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.843 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.101 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:45.101 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:45.359 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:45.359 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:45.359 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.359 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.359 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.359 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.359 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.618 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.618 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:45.618 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.618 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.877 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.877 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.136 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:46.137 [69/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:46.137 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:46.137 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:46.137 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:46.137 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:46.137 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:46.137 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:46.396 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:46.396 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:46.396 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:46.396 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:46.655 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:46.655 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:46.655 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:46.655 [83/268] Linking static target lib/librte_ring.a 00:02:46.655 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:46.655 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:46.915 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:46.915 [87/268] Linking static target lib/librte_eal.a 00:02:46.915 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.915 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.915 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:47.174 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:47.174 [92/268] Linking static target 
lib/librte_mempool.a 00:02:47.174 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:47.174 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.433 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:47.433 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:47.433 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.433 [98/268] Linking static target lib/librte_rcu.a 00:02:47.692 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:47.692 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:47.692 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:47.692 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:47.692 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:47.951 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:47.951 [105/268] Linking static target lib/librte_mbuf.a 00:02:47.951 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:47.951 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.210 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.210 [109/268] Linking static target lib/librte_net.a 00:02:48.210 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.210 [111/268] Linking static target lib/librte_meter.a 00:02:48.210 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.468 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.468 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:48.468 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.468 [116/268] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:48.468 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.727 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.985 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.985 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:48.985 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.243 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:49.502 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.502 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.502 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.502 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.502 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.502 [128/268] Linking static target lib/librte_pci.a 00:02:49.760 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.760 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:49.760 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:49.760 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.760 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.760 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:50.019 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.019 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.019 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.019 [138/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:50.019 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.019 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.019 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.019 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.019 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.019 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:50.279 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:50.538 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:50.538 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.538 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:50.538 [149/268] Linking static target lib/librte_cmdline.a 00:02:50.538 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:50.538 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:50.538 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.538 [153/268] Linking static target lib/librte_timer.a 00:02:50.798 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.057 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.057 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.057 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.057 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.057 [159/268] Linking static target lib/librte_compressdev.a 00:02:51.316 [160/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.316 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.316 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:51.316 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.316 [164/268] Linking static target lib/librte_ethdev.a 00:02:51.316 [165/268] Linking static target lib/librte_hash.a 00:02:51.575 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.575 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.834 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.834 [169/268] Linking static target lib/librte_dmadev.a 00:02:51.834 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.834 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.834 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.094 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.094 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.353 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.353 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.612 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.612 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.612 [179/268] Linking static target lib/librte_cryptodev.a 00:02:52.612 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.612 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.612 [182/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.612 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.612 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.871 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:52.871 [186/268] Linking static target lib/librte_power.a 00:02:53.130 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.130 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.130 [189/268] Linking static target lib/librte_reorder.a 00:02:53.130 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.389 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.389 [192/268] Linking static target lib/librte_security.a 00:02:53.389 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.648 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:53.648 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.907 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.167 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.167 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.426 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.426 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.426 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.685 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.685 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.685 [204/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.944 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.944 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.203 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.203 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:55.203 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:55.203 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:55.203 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:55.463 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:55.463 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:55.463 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.463 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.463 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:55.463 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.463 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.463 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:55.463 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.463 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.722 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.722 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.722 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.722 
[225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.722 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:55.981 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.358 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:57.926 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.926 [230/268] Linking target lib/librte_eal.so.24.1 00:02:58.185 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:58.185 [232/268] Linking target lib/librte_pci.so.24.1 00:02:58.185 [233/268] Linking target lib/librte_meter.so.24.1 00:02:58.185 [234/268] Linking target lib/librte_timer.so.24.1 00:02:58.185 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:58.185 [236/268] Linking target lib/librte_ring.so.24.1 00:02:58.185 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:58.443 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:58.443 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:58.443 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:58.443 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:58.443 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:58.443 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:58.443 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:58.443 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:58.702 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:58.702 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:58.702 [248/268] Linking target 
lib/librte_mbuf.so.24.1 00:02:58.702 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:58.702 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:58.702 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:58.961 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:58.961 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:58.961 [254/268] Linking target lib/librte_net.so.24.1 00:02:58.961 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:58.961 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:58.961 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:58.961 [258/268] Linking target lib/librte_security.so.24.1 00:02:58.961 [259/268] Linking target lib/librte_hash.so.24.1 00:02:59.220 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:00.598 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.598 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:00.858 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:00.858 [264/268] Linking target lib/librte_power.so.24.1 00:03:01.427 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:01.427 [266/268] Linking static target lib/librte_vhost.a 00:03:03.963 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.963 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:03.963 INFO: autodetecting backend as ninja 00:03:03.963 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:25.972 CC lib/ut_mock/mock.o 00:03:25.972 CC lib/log/log.o 00:03:25.972 CC lib/ut/ut.o 00:03:25.972 CC lib/log/log_flags.o 00:03:25.972 CC lib/log/log_deprecated.o 00:03:25.972 LIB 
libspdk_ut_mock.a 00:03:25.972 LIB libspdk_ut.a 00:03:25.972 LIB libspdk_log.a 00:03:25.972 SO libspdk_ut_mock.so.6.0 00:03:25.972 SO libspdk_ut.so.2.0 00:03:25.972 SO libspdk_log.so.7.1 00:03:25.972 SYMLINK libspdk_ut_mock.so 00:03:25.972 SYMLINK libspdk_ut.so 00:03:25.972 SYMLINK libspdk_log.so 00:03:25.972 CC lib/util/base64.o 00:03:25.972 CC lib/util/bit_array.o 00:03:25.972 CC lib/util/crc16.o 00:03:25.972 CC lib/util/cpuset.o 00:03:25.972 CC lib/util/crc32.o 00:03:25.972 CC lib/util/crc32c.o 00:03:25.972 CC lib/ioat/ioat.o 00:03:25.972 CXX lib/trace_parser/trace.o 00:03:25.972 CC lib/dma/dma.o 00:03:25.972 CC lib/vfio_user/host/vfio_user_pci.o 00:03:25.972 CC lib/util/crc32_ieee.o 00:03:25.972 CC lib/vfio_user/host/vfio_user.o 00:03:25.972 CC lib/util/crc64.o 00:03:25.972 CC lib/util/dif.o 00:03:25.972 CC lib/util/fd.o 00:03:25.972 CC lib/util/fd_group.o 00:03:25.972 LIB libspdk_dma.a 00:03:25.972 CC lib/util/file.o 00:03:25.972 SO libspdk_dma.so.5.0 00:03:25.972 CC lib/util/hexlify.o 00:03:25.972 LIB libspdk_ioat.a 00:03:25.972 SYMLINK libspdk_dma.so 00:03:25.972 CC lib/util/iov.o 00:03:25.972 SO libspdk_ioat.so.7.0 00:03:25.972 CC lib/util/math.o 00:03:25.972 CC lib/util/net.o 00:03:25.972 LIB libspdk_vfio_user.a 00:03:25.972 SYMLINK libspdk_ioat.so 00:03:25.972 CC lib/util/pipe.o 00:03:25.972 SO libspdk_vfio_user.so.5.0 00:03:25.972 CC lib/util/strerror_tls.o 00:03:25.972 CC lib/util/string.o 00:03:25.972 SYMLINK libspdk_vfio_user.so 00:03:25.972 CC lib/util/uuid.o 00:03:25.972 CC lib/util/xor.o 00:03:25.972 CC lib/util/zipf.o 00:03:25.972 CC lib/util/md5.o 00:03:25.972 LIB libspdk_util.a 00:03:25.972 SO libspdk_util.so.10.1 00:03:25.972 LIB libspdk_trace_parser.a 00:03:25.972 SO libspdk_trace_parser.so.6.0 00:03:25.972 SYMLINK libspdk_util.so 00:03:25.972 SYMLINK libspdk_trace_parser.so 00:03:25.972 CC lib/vmd/vmd.o 00:03:25.972 CC lib/vmd/led.o 00:03:25.972 CC lib/conf/conf.o 00:03:25.972 CC lib/json/json_parse.o 00:03:25.972 CC lib/json/json_write.o 
00:03:25.972 CC lib/json/json_util.o 00:03:25.972 CC lib/rdma_utils/rdma_utils.o 00:03:25.972 CC lib/idxd/idxd.o 00:03:25.972 CC lib/idxd/idxd_user.o 00:03:25.972 CC lib/env_dpdk/env.o 00:03:25.972 CC lib/idxd/idxd_kernel.o 00:03:25.972 CC lib/env_dpdk/memory.o 00:03:25.972 LIB libspdk_conf.a 00:03:25.972 CC lib/env_dpdk/pci.o 00:03:25.972 CC lib/env_dpdk/init.o 00:03:25.972 CC lib/env_dpdk/threads.o 00:03:25.972 SO libspdk_conf.so.6.0 00:03:25.972 LIB libspdk_json.a 00:03:25.972 LIB libspdk_rdma_utils.a 00:03:25.972 SO libspdk_json.so.6.0 00:03:25.972 SO libspdk_rdma_utils.so.1.0 00:03:25.972 SYMLINK libspdk_conf.so 00:03:25.972 CC lib/env_dpdk/pci_ioat.o 00:03:25.972 SYMLINK libspdk_rdma_utils.so 00:03:25.972 CC lib/env_dpdk/pci_virtio.o 00:03:25.972 SYMLINK libspdk_json.so 00:03:25.972 CC lib/env_dpdk/pci_vmd.o 00:03:25.972 CC lib/env_dpdk/pci_idxd.o 00:03:25.972 CC lib/env_dpdk/pci_event.o 00:03:25.972 CC lib/rdma_provider/common.o 00:03:25.972 CC lib/env_dpdk/sigbus_handler.o 00:03:25.972 CC lib/env_dpdk/pci_dpdk.o 00:03:25.972 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.972 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:25.972 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.972 LIB libspdk_idxd.a 00:03:25.972 LIB libspdk_vmd.a 00:03:25.972 SO libspdk_idxd.so.12.1 00:03:25.972 CC lib/jsonrpc/jsonrpc_server.o 00:03:25.972 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:25.972 CC lib/jsonrpc/jsonrpc_client.o 00:03:25.972 SO libspdk_vmd.so.6.0 00:03:25.972 SYMLINK libspdk_idxd.so 00:03:25.972 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.972 SYMLINK libspdk_vmd.so 00:03:25.972 LIB libspdk_rdma_provider.a 00:03:25.972 SO libspdk_rdma_provider.so.7.0 00:03:25.972 SYMLINK libspdk_rdma_provider.so 00:03:25.972 LIB libspdk_jsonrpc.a 00:03:25.972 SO libspdk_jsonrpc.so.6.0 00:03:25.972 SYMLINK libspdk_jsonrpc.so 00:03:26.542 CC lib/rpc/rpc.o 00:03:26.542 LIB libspdk_env_dpdk.a 00:03:26.803 SO libspdk_env_dpdk.so.15.1 00:03:26.803 LIB libspdk_rpc.a 00:03:26.803 SO libspdk_rpc.so.6.0 
00:03:26.803 SYMLINK libspdk_rpc.so 00:03:26.803 SYMLINK libspdk_env_dpdk.so 00:03:27.072 CC lib/notify/notify.o 00:03:27.072 CC lib/notify/notify_rpc.o 00:03:27.072 CC lib/trace/trace.o 00:03:27.072 CC lib/trace/trace_flags.o 00:03:27.072 CC lib/trace/trace_rpc.o 00:03:27.072 CC lib/keyring/keyring_rpc.o 00:03:27.072 CC lib/keyring/keyring.o 00:03:27.346 LIB libspdk_notify.a 00:03:27.346 SO libspdk_notify.so.6.0 00:03:27.346 LIB libspdk_keyring.a 00:03:27.605 LIB libspdk_trace.a 00:03:27.605 SYMLINK libspdk_notify.so 00:03:27.605 SO libspdk_keyring.so.2.0 00:03:27.605 SO libspdk_trace.so.11.0 00:03:27.605 SYMLINK libspdk_keyring.so 00:03:27.605 SYMLINK libspdk_trace.so 00:03:28.174 CC lib/thread/thread.o 00:03:28.174 CC lib/thread/iobuf.o 00:03:28.174 CC lib/sock/sock.o 00:03:28.174 CC lib/sock/sock_rpc.o 00:03:28.434 LIB libspdk_sock.a 00:03:28.434 SO libspdk_sock.so.10.0 00:03:28.693 SYMLINK libspdk_sock.so 00:03:28.952 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:28.952 CC lib/nvme/nvme_ctrlr.o 00:03:28.952 CC lib/nvme/nvme_pcie_common.o 00:03:28.952 CC lib/nvme/nvme_ns.o 00:03:28.952 CC lib/nvme/nvme_fabric.o 00:03:28.952 CC lib/nvme/nvme_ns_cmd.o 00:03:28.952 CC lib/nvme/nvme_qpair.o 00:03:28.952 CC lib/nvme/nvme_pcie.o 00:03:28.952 CC lib/nvme/nvme.o 00:03:29.520 LIB libspdk_thread.a 00:03:29.779 SO libspdk_thread.so.11.0 00:03:29.779 CC lib/nvme/nvme_quirks.o 00:03:29.779 CC lib/nvme/nvme_transport.o 00:03:29.779 SYMLINK libspdk_thread.so 00:03:29.779 CC lib/nvme/nvme_discovery.o 00:03:30.038 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.038 CC lib/accel/accel.o 00:03:30.038 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.038 CC lib/accel/accel_rpc.o 00:03:30.038 CC lib/nvme/nvme_tcp.o 00:03:30.296 CC lib/nvme/nvme_opal.o 00:03:30.296 CC lib/nvme/nvme_io_msg.o 00:03:30.296 CC lib/nvme/nvme_poll_group.o 00:03:30.296 CC lib/nvme/nvme_zns.o 00:03:30.554 CC lib/nvme/nvme_stubs.o 00:03:30.554 CC lib/nvme/nvme_auth.o 00:03:30.554 CC lib/blob/blobstore.o 00:03:30.813 CC 
lib/blob/request.o 00:03:30.813 CC lib/nvme/nvme_cuse.o 00:03:30.813 CC lib/blob/zeroes.o 00:03:31.072 CC lib/blob/blob_bs_dev.o 00:03:31.072 CC lib/accel/accel_sw.o 00:03:31.072 CC lib/init/json_config.o 00:03:31.072 CC lib/init/subsystem.o 00:03:31.332 CC lib/init/subsystem_rpc.o 00:03:31.332 CC lib/virtio/virtio.o 00:03:31.332 CC lib/init/rpc.o 00:03:31.332 CC lib/virtio/virtio_vhost_user.o 00:03:31.332 LIB libspdk_accel.a 00:03:31.332 CC lib/virtio/virtio_vfio_user.o 00:03:31.590 SO libspdk_accel.so.16.0 00:03:31.590 LIB libspdk_init.a 00:03:31.590 CC lib/virtio/virtio_pci.o 00:03:31.590 CC lib/nvme/nvme_rdma.o 00:03:31.590 SO libspdk_init.so.6.0 00:03:31.590 SYMLINK libspdk_accel.so 00:03:31.590 SYMLINK libspdk_init.so 00:03:31.849 CC lib/fsdev/fsdev_io.o 00:03:31.849 CC lib/fsdev/fsdev.o 00:03:31.849 CC lib/fsdev/fsdev_rpc.o 00:03:31.849 CC lib/event/app.o 00:03:31.849 CC lib/bdev/bdev_rpc.o 00:03:31.849 CC lib/bdev/bdev_zone.o 00:03:31.849 CC lib/bdev/bdev.o 00:03:31.849 LIB libspdk_virtio.a 00:03:31.849 SO libspdk_virtio.so.7.0 00:03:31.849 CC lib/bdev/part.o 00:03:32.108 SYMLINK libspdk_virtio.so 00:03:32.108 CC lib/bdev/scsi_nvme.o 00:03:32.108 CC lib/event/reactor.o 00:03:32.108 CC lib/event/log_rpc.o 00:03:32.108 CC lib/event/app_rpc.o 00:03:32.108 CC lib/event/scheduler_static.o 00:03:32.674 LIB libspdk_event.a 00:03:32.674 LIB libspdk_fsdev.a 00:03:32.674 SO libspdk_event.so.14.0 00:03:32.674 SO libspdk_fsdev.so.2.0 00:03:32.674 SYMLINK libspdk_event.so 00:03:32.674 SYMLINK libspdk_fsdev.so 00:03:32.932 LIB libspdk_nvme.a 00:03:33.191 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:33.191 SO libspdk_nvme.so.15.0 00:03:33.450 SYMLINK libspdk_nvme.so 00:03:33.708 LIB libspdk_fuse_dispatcher.a 00:03:33.966 SO libspdk_fuse_dispatcher.so.1.0 00:03:33.966 SYMLINK libspdk_fuse_dispatcher.so 00:03:34.532 LIB libspdk_blob.a 00:03:34.532 SO libspdk_blob.so.11.0 00:03:34.532 SYMLINK libspdk_blob.so 00:03:34.790 LIB libspdk_bdev.a 00:03:35.050 CC lib/lvol/lvol.o 
00:03:35.050 CC lib/blobfs/blobfs.o 00:03:35.050 CC lib/blobfs/tree.o 00:03:35.050 SO libspdk_bdev.so.17.0 00:03:35.050 SYMLINK libspdk_bdev.so 00:03:35.310 CC lib/nvmf/ctrlr_discovery.o 00:03:35.310 CC lib/nvmf/ctrlr.o 00:03:35.310 CC lib/nvmf/ctrlr_bdev.o 00:03:35.310 CC lib/nvmf/subsystem.o 00:03:35.310 CC lib/ublk/ublk.o 00:03:35.310 CC lib/ftl/ftl_core.o 00:03:35.310 CC lib/nbd/nbd.o 00:03:35.310 CC lib/scsi/dev.o 00:03:35.569 CC lib/scsi/lun.o 00:03:35.828 CC lib/nbd/nbd_rpc.o 00:03:35.828 CC lib/ftl/ftl_init.o 00:03:35.828 CC lib/nvmf/nvmf.o 00:03:35.828 LIB libspdk_nbd.a 00:03:35.828 CC lib/scsi/port.o 00:03:36.088 SO libspdk_nbd.so.7.0 00:03:36.088 LIB libspdk_blobfs.a 00:03:36.088 SO libspdk_blobfs.so.10.0 00:03:36.088 CC lib/ftl/ftl_layout.o 00:03:36.088 SYMLINK libspdk_nbd.so 00:03:36.088 CC lib/nvmf/nvmf_rpc.o 00:03:36.088 CC lib/ublk/ublk_rpc.o 00:03:36.088 SYMLINK libspdk_blobfs.so 00:03:36.088 CC lib/ftl/ftl_debug.o 00:03:36.088 LIB libspdk_lvol.a 00:03:36.088 SO libspdk_lvol.so.10.0 00:03:36.088 CC lib/scsi/scsi.o 00:03:36.088 SYMLINK libspdk_lvol.so 00:03:36.088 CC lib/nvmf/transport.o 00:03:36.088 CC lib/scsi/scsi_bdev.o 00:03:36.348 LIB libspdk_ublk.a 00:03:36.348 SO libspdk_ublk.so.3.0 00:03:36.348 CC lib/ftl/ftl_io.o 00:03:36.348 CC lib/scsi/scsi_pr.o 00:03:36.348 SYMLINK libspdk_ublk.so 00:03:36.349 CC lib/scsi/scsi_rpc.o 00:03:36.349 CC lib/scsi/task.o 00:03:36.608 CC lib/nvmf/tcp.o 00:03:36.608 CC lib/ftl/ftl_sb.o 00:03:36.608 CC lib/ftl/ftl_l2p.o 00:03:36.608 CC lib/nvmf/stubs.o 00:03:36.608 CC lib/nvmf/mdns_server.o 00:03:36.608 CC lib/ftl/ftl_l2p_flat.o 00:03:36.867 LIB libspdk_scsi.a 00:03:36.867 CC lib/nvmf/rdma.o 00:03:36.867 CC lib/ftl/ftl_nv_cache.o 00:03:36.867 SO libspdk_scsi.so.9.0 00:03:36.867 CC lib/nvmf/auth.o 00:03:36.867 SYMLINK libspdk_scsi.so 00:03:36.867 CC lib/ftl/ftl_band.o 00:03:37.126 CC lib/iscsi/conn.o 00:03:37.126 CC lib/iscsi/init_grp.o 00:03:37.126 CC lib/vhost/vhost.o 00:03:37.126 CC lib/iscsi/iscsi.o 
00:03:37.384 CC lib/vhost/vhost_rpc.o 00:03:37.384 CC lib/vhost/vhost_scsi.o 00:03:37.384 CC lib/vhost/vhost_blk.o 00:03:37.641 CC lib/iscsi/param.o 00:03:37.899 CC lib/iscsi/portal_grp.o 00:03:37.899 CC lib/vhost/rte_vhost_user.o 00:03:37.899 CC lib/ftl/ftl_band_ops.o 00:03:38.158 CC lib/iscsi/tgt_node.o 00:03:38.158 CC lib/iscsi/iscsi_subsystem.o 00:03:38.158 CC lib/iscsi/iscsi_rpc.o 00:03:38.158 CC lib/ftl/ftl_writer.o 00:03:38.414 CC lib/ftl/ftl_rq.o 00:03:38.414 CC lib/iscsi/task.o 00:03:38.414 CC lib/ftl/ftl_reloc.o 00:03:38.414 CC lib/ftl/ftl_l2p_cache.o 00:03:38.414 CC lib/ftl/ftl_p2l.o 00:03:38.672 CC lib/ftl/ftl_p2l_log.o 00:03:38.672 CC lib/ftl/mngt/ftl_mngt.o 00:03:38.672 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:38.672 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:38.672 LIB libspdk_iscsi.a 00:03:38.930 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:38.930 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:38.930 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:38.930 SO libspdk_iscsi.so.8.0 00:03:38.930 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:38.930 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:38.930 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:38.930 LIB libspdk_vhost.a 00:03:38.930 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:39.188 SO libspdk_vhost.so.8.0 00:03:39.188 SYMLINK libspdk_iscsi.so 00:03:39.188 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:39.188 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:39.188 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:39.188 CC lib/ftl/utils/ftl_conf.o 00:03:39.188 SYMLINK libspdk_vhost.so 00:03:39.188 CC lib/ftl/utils/ftl_md.o 00:03:39.188 CC lib/ftl/utils/ftl_mempool.o 00:03:39.188 CC lib/ftl/utils/ftl_bitmap.o 00:03:39.188 CC lib/ftl/utils/ftl_property.o 00:03:39.447 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:39.447 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:39.447 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:39.447 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:39.447 LIB libspdk_nvmf.a 00:03:39.447 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:39.447 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:39.447 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:39.705 SO libspdk_nvmf.so.20.0 00:03:39.705 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:39.705 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:39.705 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:39.705 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:39.705 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:39.705 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:39.705 CC lib/ftl/base/ftl_base_dev.o 00:03:39.705 CC lib/ftl/base/ftl_base_bdev.o 00:03:39.705 CC lib/ftl/ftl_trace.o 00:03:39.963 SYMLINK libspdk_nvmf.so 00:03:39.963 LIB libspdk_ftl.a 00:03:40.221 SO libspdk_ftl.so.9.0 00:03:40.480 SYMLINK libspdk_ftl.so 00:03:41.090 CC module/env_dpdk/env_dpdk_rpc.o 00:03:41.090 CC module/blob/bdev/blob_bdev.o 00:03:41.090 CC module/accel/dsa/accel_dsa.o 00:03:41.090 CC module/accel/ioat/accel_ioat.o 00:03:41.090 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:41.090 CC module/keyring/file/keyring.o 00:03:41.090 CC module/sock/posix/posix.o 00:03:41.090 CC module/accel/error/accel_error.o 00:03:41.090 CC module/fsdev/aio/fsdev_aio.o 00:03:41.090 CC module/accel/iaa/accel_iaa.o 00:03:41.090 LIB libspdk_env_dpdk_rpc.a 00:03:41.090 SO libspdk_env_dpdk_rpc.so.6.0 00:03:41.352 SYMLINK libspdk_env_dpdk_rpc.so 00:03:41.352 CC module/keyring/file/keyring_rpc.o 00:03:41.352 CC module/accel/ioat/accel_ioat_rpc.o 00:03:41.352 LIB libspdk_scheduler_dynamic.a 00:03:41.352 CC module/accel/iaa/accel_iaa_rpc.o 00:03:41.352 SO libspdk_scheduler_dynamic.so.4.0 00:03:41.352 CC module/accel/error/accel_error_rpc.o 00:03:41.352 CC module/accel/dsa/accel_dsa_rpc.o 00:03:41.352 LIB libspdk_keyring_file.a 00:03:41.352 LIB libspdk_accel_ioat.a 00:03:41.352 SYMLINK libspdk_scheduler_dynamic.so 00:03:41.352 LIB libspdk_blob_bdev.a 00:03:41.352 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:41.352 SO libspdk_keyring_file.so.2.0 00:03:41.352 SO libspdk_accel_ioat.so.6.0 00:03:41.352 SO libspdk_blob_bdev.so.11.0 00:03:41.352 LIB libspdk_accel_iaa.a 
00:03:41.352 SYMLINK libspdk_keyring_file.so 00:03:41.352 SYMLINK libspdk_accel_ioat.so 00:03:41.352 SO libspdk_accel_iaa.so.3.0 00:03:41.352 LIB libspdk_accel_dsa.a 00:03:41.352 LIB libspdk_accel_error.a 00:03:41.352 CC module/fsdev/aio/linux_aio_mgr.o 00:03:41.352 SYMLINK libspdk_blob_bdev.so 00:03:41.611 SO libspdk_accel_error.so.2.0 00:03:41.611 SO libspdk_accel_dsa.so.5.0 00:03:41.611 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:41.611 SYMLINK libspdk_accel_iaa.so 00:03:41.611 SYMLINK libspdk_accel_error.so 00:03:41.611 SYMLINK libspdk_accel_dsa.so 00:03:41.611 CC module/keyring/linux/keyring.o 00:03:41.611 CC module/scheduler/gscheduler/gscheduler.o 00:03:41.611 LIB libspdk_scheduler_dpdk_governor.a 00:03:41.870 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:41.870 CC module/bdev/delay/vbdev_delay.o 00:03:41.870 CC module/bdev/gpt/gpt.o 00:03:41.870 CC module/keyring/linux/keyring_rpc.o 00:03:41.870 CC module/blobfs/bdev/blobfs_bdev.o 00:03:41.870 CC module/bdev/error/vbdev_error.o 00:03:41.870 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:41.870 CC module/bdev/error/vbdev_error_rpc.o 00:03:41.870 LIB libspdk_scheduler_gscheduler.a 00:03:41.870 SO libspdk_scheduler_gscheduler.so.4.0 00:03:41.870 LIB libspdk_fsdev_aio.a 00:03:41.870 CC module/bdev/lvol/vbdev_lvol.o 00:03:41.870 SYMLINK libspdk_scheduler_gscheduler.so 00:03:41.870 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:41.870 LIB libspdk_keyring_linux.a 00:03:41.870 SO libspdk_fsdev_aio.so.1.0 00:03:42.129 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:42.129 SO libspdk_keyring_linux.so.1.0 00:03:42.129 LIB libspdk_sock_posix.a 00:03:42.129 CC module/bdev/gpt/vbdev_gpt.o 00:03:42.129 SO libspdk_sock_posix.so.6.0 00:03:42.129 SYMLINK libspdk_fsdev_aio.so 00:03:42.129 SYMLINK libspdk_keyring_linux.so 00:03:42.129 LIB libspdk_bdev_error.a 00:03:42.129 SYMLINK libspdk_sock_posix.so 00:03:42.129 SO libspdk_bdev_error.so.6.0 00:03:42.129 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:42.129 LIB 
libspdk_blobfs_bdev.a 00:03:42.129 CC module/bdev/malloc/bdev_malloc.o 00:03:42.129 SO libspdk_blobfs_bdev.so.6.0 00:03:42.129 LIB libspdk_bdev_delay.a 00:03:42.129 SYMLINK libspdk_bdev_error.so 00:03:42.129 CC module/bdev/null/bdev_null.o 00:03:42.129 CC module/bdev/nvme/bdev_nvme.o 00:03:42.129 CC module/bdev/null/bdev_null_rpc.o 00:03:42.388 SO libspdk_bdev_delay.so.6.0 00:03:42.388 SYMLINK libspdk_blobfs_bdev.so 00:03:42.388 LIB libspdk_bdev_gpt.a 00:03:42.388 CC module/bdev/passthru/vbdev_passthru.o 00:03:42.388 SYMLINK libspdk_bdev_delay.so 00:03:42.388 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:42.388 SO libspdk_bdev_gpt.so.6.0 00:03:42.388 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:42.388 SYMLINK libspdk_bdev_gpt.so 00:03:42.388 CC module/bdev/raid/bdev_raid.o 00:03:42.388 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:42.388 CC module/bdev/nvme/nvme_rpc.o 00:03:42.646 LIB libspdk_bdev_null.a 00:03:42.646 CC module/bdev/split/vbdev_split.o 00:03:42.646 SO libspdk_bdev_null.so.6.0 00:03:42.646 LIB libspdk_bdev_lvol.a 00:03:42.646 SO libspdk_bdev_lvol.so.6.0 00:03:42.646 CC module/bdev/split/vbdev_split_rpc.o 00:03:42.646 SYMLINK libspdk_bdev_null.so 00:03:42.646 LIB libspdk_bdev_malloc.a 00:03:42.646 LIB libspdk_bdev_passthru.a 00:03:42.646 SYMLINK libspdk_bdev_lvol.so 00:03:42.646 SO libspdk_bdev_passthru.so.6.0 00:03:42.646 SO libspdk_bdev_malloc.so.6.0 00:03:42.646 SYMLINK libspdk_bdev_passthru.so 00:03:42.646 SYMLINK libspdk_bdev_malloc.so 00:03:42.646 CC module/bdev/nvme/bdev_mdns_client.o 00:03:42.905 CC module/bdev/nvme/vbdev_opal.o 00:03:42.906 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:42.906 LIB libspdk_bdev_split.a 00:03:42.906 CC module/bdev/aio/bdev_aio.o 00:03:42.906 SO libspdk_bdev_split.so.6.0 00:03:42.906 CC module/bdev/ftl/bdev_ftl.o 00:03:42.906 SYMLINK libspdk_bdev_split.so 00:03:42.906 CC module/bdev/aio/bdev_aio_rpc.o 00:03:42.906 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:42.906 CC 
module/bdev/iscsi/bdev_iscsi.o 00:03:43.164 CC module/bdev/raid/bdev_raid_rpc.o 00:03:43.164 CC module/bdev/raid/bdev_raid_sb.o 00:03:43.164 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:43.165 LIB libspdk_bdev_zone_block.a 00:03:43.165 SO libspdk_bdev_zone_block.so.6.0 00:03:43.165 LIB libspdk_bdev_aio.a 00:03:43.165 SO libspdk_bdev_aio.so.6.0 00:03:43.165 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:43.165 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:43.165 SYMLINK libspdk_bdev_zone_block.so 00:03:43.165 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:43.165 CC module/bdev/raid/raid0.o 00:03:43.165 SYMLINK libspdk_bdev_aio.so 00:03:43.165 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:43.423 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:43.423 CC module/bdev/raid/raid1.o 00:03:43.423 CC module/bdev/raid/concat.o 00:03:43.423 LIB libspdk_bdev_ftl.a 00:03:43.423 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:43.423 SO libspdk_bdev_ftl.so.6.0 00:03:43.423 LIB libspdk_bdev_iscsi.a 00:03:43.681 SYMLINK libspdk_bdev_ftl.so 00:03:43.681 CC module/bdev/raid/raid5f.o 00:03:43.681 SO libspdk_bdev_iscsi.so.6.0 00:03:43.681 SYMLINK libspdk_bdev_iscsi.so 00:03:43.938 LIB libspdk_bdev_virtio.a 00:03:43.938 SO libspdk_bdev_virtio.so.6.0 00:03:43.938 SYMLINK libspdk_bdev_virtio.so 00:03:44.198 LIB libspdk_bdev_raid.a 00:03:44.198 SO libspdk_bdev_raid.so.6.0 00:03:44.198 SYMLINK libspdk_bdev_raid.so 00:03:45.134 LIB libspdk_bdev_nvme.a 00:03:45.394 SO libspdk_bdev_nvme.so.7.1 00:03:45.394 SYMLINK libspdk_bdev_nvme.so 00:03:46.019 CC module/event/subsystems/keyring/keyring.o 00:03:46.019 CC module/event/subsystems/scheduler/scheduler.o 00:03:46.019 CC module/event/subsystems/iobuf/iobuf.o 00:03:46.019 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:46.019 CC module/event/subsystems/vmd/vmd.o 00:03:46.019 CC module/event/subsystems/sock/sock.o 00:03:46.019 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:46.019 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:46.019 CC 
module/event/subsystems/fsdev/fsdev.o 00:03:46.278 LIB libspdk_event_sock.a 00:03:46.278 LIB libspdk_event_keyring.a 00:03:46.278 LIB libspdk_event_scheduler.a 00:03:46.278 LIB libspdk_event_vhost_blk.a 00:03:46.278 LIB libspdk_event_vmd.a 00:03:46.278 LIB libspdk_event_fsdev.a 00:03:46.278 LIB libspdk_event_iobuf.a 00:03:46.278 SO libspdk_event_sock.so.5.0 00:03:46.278 SO libspdk_event_keyring.so.1.0 00:03:46.278 SO libspdk_event_scheduler.so.4.0 00:03:46.278 SO libspdk_event_vhost_blk.so.3.0 00:03:46.278 SO libspdk_event_vmd.so.6.0 00:03:46.278 SO libspdk_event_fsdev.so.1.0 00:03:46.278 SO libspdk_event_iobuf.so.3.0 00:03:46.278 SYMLINK libspdk_event_sock.so 00:03:46.278 SYMLINK libspdk_event_keyring.so 00:03:46.278 SYMLINK libspdk_event_scheduler.so 00:03:46.278 SYMLINK libspdk_event_vhost_blk.so 00:03:46.278 SYMLINK libspdk_event_fsdev.so 00:03:46.278 SYMLINK libspdk_event_vmd.so 00:03:46.278 SYMLINK libspdk_event_iobuf.so 00:03:46.846 CC module/event/subsystems/accel/accel.o 00:03:46.846 LIB libspdk_event_accel.a 00:03:46.846 SO libspdk_event_accel.so.6.0 00:03:47.106 SYMLINK libspdk_event_accel.so 00:03:47.366 CC module/event/subsystems/bdev/bdev.o 00:03:47.626 LIB libspdk_event_bdev.a 00:03:47.626 SO libspdk_event_bdev.so.6.0 00:03:47.626 SYMLINK libspdk_event_bdev.so 00:03:48.194 CC module/event/subsystems/scsi/scsi.o 00:03:48.194 CC module/event/subsystems/nbd/nbd.o 00:03:48.194 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:48.194 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:48.194 CC module/event/subsystems/ublk/ublk.o 00:03:48.194 LIB libspdk_event_nbd.a 00:03:48.194 LIB libspdk_event_ublk.a 00:03:48.194 LIB libspdk_event_scsi.a 00:03:48.194 SO libspdk_event_nbd.so.6.0 00:03:48.194 SO libspdk_event_ublk.so.3.0 00:03:48.194 SO libspdk_event_scsi.so.6.0 00:03:48.194 SYMLINK libspdk_event_nbd.so 00:03:48.194 LIB libspdk_event_nvmf.a 00:03:48.194 SYMLINK libspdk_event_ublk.so 00:03:48.194 SYMLINK libspdk_event_scsi.so 00:03:48.194 SO 
libspdk_event_nvmf.so.6.0 00:03:48.453 SYMLINK libspdk_event_nvmf.so 00:03:48.712 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:48.712 CC module/event/subsystems/iscsi/iscsi.o 00:03:48.712 LIB libspdk_event_vhost_scsi.a 00:03:48.972 LIB libspdk_event_iscsi.a 00:03:48.972 SO libspdk_event_vhost_scsi.so.3.0 00:03:48.972 SO libspdk_event_iscsi.so.6.0 00:03:48.972 SYMLINK libspdk_event_vhost_scsi.so 00:03:48.972 SYMLINK libspdk_event_iscsi.so 00:03:49.232 SO libspdk.so.6.0 00:03:49.232 SYMLINK libspdk.so 00:03:49.491 CC app/spdk_lspci/spdk_lspci.o 00:03:49.491 CC app/spdk_nvme_perf/perf.o 00:03:49.491 CXX app/trace/trace.o 00:03:49.491 CC app/trace_record/trace_record.o 00:03:49.491 CC app/iscsi_tgt/iscsi_tgt.o 00:03:49.491 CC app/nvmf_tgt/nvmf_main.o 00:03:49.491 CC app/spdk_tgt/spdk_tgt.o 00:03:49.491 CC examples/ioat/perf/perf.o 00:03:49.491 CC examples/util/zipf/zipf.o 00:03:49.491 CC test/thread/poller_perf/poller_perf.o 00:03:49.491 LINK spdk_lspci 00:03:49.749 LINK nvmf_tgt 00:03:49.749 LINK zipf 00:03:49.749 LINK iscsi_tgt 00:03:49.749 LINK spdk_tgt 00:03:49.749 LINK poller_perf 00:03:49.749 LINK spdk_trace_record 00:03:49.749 LINK ioat_perf 00:03:49.749 LINK spdk_trace 00:03:50.008 CC app/spdk_nvme_identify/identify.o 00:03:50.008 CC app/spdk_nvme_discover/discovery_aer.o 00:03:50.008 CC app/spdk_top/spdk_top.o 00:03:50.008 CC examples/ioat/verify/verify.o 00:03:50.008 CC app/spdk_dd/spdk_dd.o 00:03:50.008 TEST_HEADER include/spdk/accel.h 00:03:50.008 TEST_HEADER include/spdk/accel_module.h 00:03:50.008 TEST_HEADER include/spdk/assert.h 00:03:50.008 TEST_HEADER include/spdk/barrier.h 00:03:50.008 TEST_HEADER include/spdk/base64.h 00:03:50.008 TEST_HEADER include/spdk/bdev.h 00:03:50.008 TEST_HEADER include/spdk/bdev_module.h 00:03:50.008 TEST_HEADER include/spdk/bdev_zone.h 00:03:50.008 TEST_HEADER include/spdk/bit_array.h 00:03:50.008 TEST_HEADER include/spdk/bit_pool.h 00:03:50.008 TEST_HEADER include/spdk/blob_bdev.h 00:03:50.008 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:03:50.008 TEST_HEADER include/spdk/blobfs.h 00:03:50.008 TEST_HEADER include/spdk/blob.h 00:03:50.008 TEST_HEADER include/spdk/conf.h 00:03:50.008 TEST_HEADER include/spdk/config.h 00:03:50.009 TEST_HEADER include/spdk/cpuset.h 00:03:50.009 TEST_HEADER include/spdk/crc16.h 00:03:50.009 TEST_HEADER include/spdk/crc32.h 00:03:50.009 TEST_HEADER include/spdk/crc64.h 00:03:50.009 TEST_HEADER include/spdk/dif.h 00:03:50.009 TEST_HEADER include/spdk/dma.h 00:03:50.009 TEST_HEADER include/spdk/endian.h 00:03:50.009 TEST_HEADER include/spdk/env_dpdk.h 00:03:50.009 TEST_HEADER include/spdk/env.h 00:03:50.009 TEST_HEADER include/spdk/event.h 00:03:50.009 TEST_HEADER include/spdk/fd_group.h 00:03:50.009 TEST_HEADER include/spdk/fd.h 00:03:50.009 TEST_HEADER include/spdk/file.h 00:03:50.009 TEST_HEADER include/spdk/fsdev.h 00:03:50.009 TEST_HEADER include/spdk/fsdev_module.h 00:03:50.009 TEST_HEADER include/spdk/ftl.h 00:03:50.009 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:50.009 TEST_HEADER include/spdk/gpt_spec.h 00:03:50.009 TEST_HEADER include/spdk/hexlify.h 00:03:50.009 TEST_HEADER include/spdk/histogram_data.h 00:03:50.009 TEST_HEADER include/spdk/idxd.h 00:03:50.009 CC test/dma/test_dma/test_dma.o 00:03:50.009 TEST_HEADER include/spdk/idxd_spec.h 00:03:50.009 TEST_HEADER include/spdk/init.h 00:03:50.009 TEST_HEADER include/spdk/ioat.h 00:03:50.009 TEST_HEADER include/spdk/ioat_spec.h 00:03:50.009 TEST_HEADER include/spdk/iscsi_spec.h 00:03:50.009 TEST_HEADER include/spdk/json.h 00:03:50.009 TEST_HEADER include/spdk/jsonrpc.h 00:03:50.009 LINK spdk_nvme_discover 00:03:50.009 TEST_HEADER include/spdk/keyring.h 00:03:50.009 TEST_HEADER include/spdk/keyring_module.h 00:03:50.268 TEST_HEADER include/spdk/likely.h 00:03:50.268 TEST_HEADER include/spdk/log.h 00:03:50.268 TEST_HEADER include/spdk/lvol.h 00:03:50.268 TEST_HEADER include/spdk/md5.h 00:03:50.268 TEST_HEADER include/spdk/memory.h 00:03:50.268 TEST_HEADER include/spdk/mmio.h 
00:03:50.268 CC test/app/bdev_svc/bdev_svc.o 00:03:50.268 TEST_HEADER include/spdk/nbd.h 00:03:50.268 TEST_HEADER include/spdk/net.h 00:03:50.268 TEST_HEADER include/spdk/notify.h 00:03:50.268 TEST_HEADER include/spdk/nvme.h 00:03:50.268 TEST_HEADER include/spdk/nvme_intel.h 00:03:50.268 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:50.268 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:50.268 TEST_HEADER include/spdk/nvme_spec.h 00:03:50.268 TEST_HEADER include/spdk/nvme_zns.h 00:03:50.268 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:50.268 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:50.268 TEST_HEADER include/spdk/nvmf.h 00:03:50.268 TEST_HEADER include/spdk/nvmf_spec.h 00:03:50.268 TEST_HEADER include/spdk/nvmf_transport.h 00:03:50.268 TEST_HEADER include/spdk/opal.h 00:03:50.268 TEST_HEADER include/spdk/opal_spec.h 00:03:50.268 TEST_HEADER include/spdk/pci_ids.h 00:03:50.268 TEST_HEADER include/spdk/pipe.h 00:03:50.268 TEST_HEADER include/spdk/queue.h 00:03:50.268 TEST_HEADER include/spdk/reduce.h 00:03:50.268 TEST_HEADER include/spdk/rpc.h 00:03:50.268 TEST_HEADER include/spdk/scheduler.h 00:03:50.268 TEST_HEADER include/spdk/scsi.h 00:03:50.268 TEST_HEADER include/spdk/scsi_spec.h 00:03:50.268 TEST_HEADER include/spdk/sock.h 00:03:50.268 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:50.268 TEST_HEADER include/spdk/stdinc.h 00:03:50.268 TEST_HEADER include/spdk/string.h 00:03:50.268 LINK verify 00:03:50.268 TEST_HEADER include/spdk/thread.h 00:03:50.268 TEST_HEADER include/spdk/trace.h 00:03:50.268 TEST_HEADER include/spdk/trace_parser.h 00:03:50.268 TEST_HEADER include/spdk/tree.h 00:03:50.268 TEST_HEADER include/spdk/ublk.h 00:03:50.268 TEST_HEADER include/spdk/util.h 00:03:50.268 TEST_HEADER include/spdk/uuid.h 00:03:50.268 TEST_HEADER include/spdk/version.h 00:03:50.268 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:50.268 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:50.268 TEST_HEADER include/spdk/vhost.h 00:03:50.268 TEST_HEADER include/spdk/vmd.h 
00:03:50.268 TEST_HEADER include/spdk/xor.h 00:03:50.268 TEST_HEADER include/spdk/zipf.h 00:03:50.268 CXX test/cpp_headers/accel.o 00:03:50.268 LINK bdev_svc 00:03:50.527 LINK spdk_nvme_perf 00:03:50.527 CXX test/cpp_headers/accel_module.o 00:03:50.527 LINK spdk_dd 00:03:50.527 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:50.527 CC app/fio/nvme/fio_plugin.o 00:03:50.527 CXX test/cpp_headers/assert.o 00:03:50.786 LINK interrupt_tgt 00:03:50.786 CC examples/thread/thread/thread_ex.o 00:03:50.786 LINK test_dma 00:03:50.786 LINK nvme_fuzz 00:03:50.786 CC app/fio/bdev/fio_plugin.o 00:03:50.786 CXX test/cpp_headers/barrier.o 00:03:50.786 CC examples/sock/hello_world/hello_sock.o 00:03:50.786 LINK spdk_nvme_identify 00:03:50.786 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:50.786 CXX test/cpp_headers/base64.o 00:03:51.045 CXX test/cpp_headers/bdev.o 00:03:51.045 LINK thread 00:03:51.045 LINK spdk_top 00:03:51.045 LINK hello_sock 00:03:51.045 LINK spdk_nvme 00:03:51.045 CC test/env/mem_callbacks/mem_callbacks.o 00:03:51.045 CXX test/cpp_headers/bdev_module.o 00:03:51.305 CC test/env/vtophys/vtophys.o 00:03:51.305 CXX test/cpp_headers/bdev_zone.o 00:03:51.305 CC app/vhost/vhost.o 00:03:51.305 CC test/event/event_perf/event_perf.o 00:03:51.305 LINK spdk_bdev 00:03:51.305 CC test/nvme/aer/aer.o 00:03:51.305 LINK vtophys 00:03:51.305 CXX test/cpp_headers/bit_array.o 00:03:51.305 LINK vhost 00:03:51.305 CC test/nvme/reset/reset.o 00:03:51.305 LINK event_perf 00:03:51.564 CC examples/vmd/lsvmd/lsvmd.o 00:03:51.564 CC test/nvme/sgl/sgl.o 00:03:51.564 CXX test/cpp_headers/bit_pool.o 00:03:51.564 CC test/rpc_client/rpc_client_test.o 00:03:51.564 LINK lsvmd 00:03:51.564 LINK mem_callbacks 00:03:51.564 LINK aer 00:03:51.564 CC test/event/reactor/reactor.o 00:03:51.564 CXX test/cpp_headers/blob_bdev.o 00:03:51.564 LINK reset 00:03:51.823 LINK rpc_client_test 00:03:51.823 LINK reactor 00:03:51.823 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:51.823 LINK sgl 
00:03:51.823 CC test/accel/dif/dif.o 00:03:51.823 CXX test/cpp_headers/blobfs_bdev.o 00:03:51.823 CC examples/vmd/led/led.o 00:03:52.082 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:52.082 LINK env_dpdk_post_init 00:03:52.082 CC test/event/reactor_perf/reactor_perf.o 00:03:52.082 CXX test/cpp_headers/blobfs.o 00:03:52.082 CC test/blobfs/mkfs/mkfs.o 00:03:52.082 LINK led 00:03:52.082 CC test/nvme/e2edp/nvme_dp.o 00:03:52.082 CC test/lvol/esnap/esnap.o 00:03:52.082 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:52.341 LINK reactor_perf 00:03:52.341 CXX test/cpp_headers/blob.o 00:03:52.341 CC test/env/memory/memory_ut.o 00:03:52.341 LINK mkfs 00:03:52.341 CXX test/cpp_headers/conf.o 00:03:52.341 LINK nvme_dp 00:03:52.341 CC examples/idxd/perf/perf.o 00:03:52.602 CC test/event/app_repeat/app_repeat.o 00:03:52.602 CXX test/cpp_headers/config.o 00:03:52.602 LINK dif 00:03:52.602 CXX test/cpp_headers/cpuset.o 00:03:52.602 CC test/app/histogram_perf/histogram_perf.o 00:03:52.602 LINK vhost_fuzz 00:03:52.602 LINK app_repeat 00:03:52.602 CC test/nvme/overhead/overhead.o 00:03:52.862 LINK histogram_perf 00:03:52.862 CXX test/cpp_headers/crc16.o 00:03:52.862 LINK idxd_perf 00:03:52.862 LINK iscsi_fuzz 00:03:52.862 CC test/nvme/err_injection/err_injection.o 00:03:52.862 CXX test/cpp_headers/crc32.o 00:03:52.862 CC test/nvme/startup/startup.o 00:03:52.862 CC test/event/scheduler/scheduler.o 00:03:53.123 LINK overhead 00:03:53.123 CC test/nvme/reserve/reserve.o 00:03:53.123 CXX test/cpp_headers/crc64.o 00:03:53.123 LINK err_injection 00:03:53.123 LINK startup 00:03:53.123 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:53.123 CC test/app/jsoncat/jsoncat.o 00:03:53.123 LINK reserve 00:03:53.123 LINK scheduler 00:03:53.384 CXX test/cpp_headers/dif.o 00:03:53.384 CC test/env/pci/pci_ut.o 00:03:53.384 LINK jsoncat 00:03:53.384 CC test/nvme/simple_copy/simple_copy.o 00:03:53.384 CC test/nvme/connect_stress/connect_stress.o 00:03:53.384 CXX test/cpp_headers/dma.o 00:03:53.384 
LINK hello_fsdev 00:03:53.644 LINK memory_ut 00:03:53.644 CXX test/cpp_headers/endian.o 00:03:53.644 CC examples/accel/perf/accel_perf.o 00:03:53.644 LINK connect_stress 00:03:53.644 CC test/app/stub/stub.o 00:03:53.644 LINK simple_copy 00:03:53.644 CC test/bdev/bdevio/bdevio.o 00:03:53.644 CXX test/cpp_headers/env_dpdk.o 00:03:53.644 LINK pci_ut 00:03:53.904 LINK stub 00:03:53.904 CC examples/blob/hello_world/hello_blob.o 00:03:53.904 CC test/nvme/boot_partition/boot_partition.o 00:03:53.904 CXX test/cpp_headers/env.o 00:03:53.904 CC examples/blob/cli/blobcli.o 00:03:53.904 CC test/nvme/compliance/nvme_compliance.o 00:03:54.163 CXX test/cpp_headers/event.o 00:03:54.163 LINK boot_partition 00:03:54.163 LINK bdevio 00:03:54.163 LINK hello_blob 00:03:54.163 CC examples/nvme/hello_world/hello_world.o 00:03:54.163 CC test/nvme/fused_ordering/fused_ordering.o 00:03:54.163 LINK accel_perf 00:03:54.163 CXX test/cpp_headers/fd_group.o 00:03:54.422 LINK nvme_compliance 00:03:54.422 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:54.422 CXX test/cpp_headers/fd.o 00:03:54.422 LINK fused_ordering 00:03:54.422 LINK hello_world 00:03:54.422 CC test/nvme/fdp/fdp.o 00:03:54.422 CC examples/nvme/reconnect/reconnect.o 00:03:54.422 LINK blobcli 00:03:54.682 CXX test/cpp_headers/file.o 00:03:54.682 LINK doorbell_aers 00:03:54.682 CC examples/bdev/hello_world/hello_bdev.o 00:03:54.682 CC examples/bdev/bdevperf/bdevperf.o 00:03:54.682 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.682 CC test/nvme/cuse/cuse.o 00:03:54.682 CXX test/cpp_headers/fsdev.o 00:03:54.942 CC examples/nvme/arbitration/arbitration.o 00:03:54.942 LINK fdp 00:03:54.942 LINK hello_bdev 00:03:54.942 CC examples/nvme/hotplug/hotplug.o 00:03:54.942 LINK reconnect 00:03:54.942 CXX test/cpp_headers/fsdev_module.o 00:03:55.251 CXX test/cpp_headers/ftl.o 00:03:55.251 LINK hotplug 00:03:55.251 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:55.251 CC examples/nvme/abort/abort.o 00:03:55.251 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:03:55.251 LINK arbitration 00:03:55.251 CXX test/cpp_headers/fuse_dispatcher.o 00:03:55.251 LINK nvme_manage 00:03:55.251 CXX test/cpp_headers/gpt_spec.o 00:03:55.251 LINK cmb_copy 00:03:55.251 LINK pmr_persistence 00:03:55.251 CXX test/cpp_headers/hexlify.o 00:03:55.510 CXX test/cpp_headers/histogram_data.o 00:03:55.510 CXX test/cpp_headers/idxd.o 00:03:55.510 CXX test/cpp_headers/idxd_spec.o 00:03:55.510 CXX test/cpp_headers/init.o 00:03:55.510 CXX test/cpp_headers/ioat.o 00:03:55.510 CXX test/cpp_headers/ioat_spec.o 00:03:55.510 LINK abort 00:03:55.510 CXX test/cpp_headers/iscsi_spec.o 00:03:55.510 LINK bdevperf 00:03:55.770 CXX test/cpp_headers/json.o 00:03:55.770 CXX test/cpp_headers/jsonrpc.o 00:03:55.770 CXX test/cpp_headers/keyring.o 00:03:55.770 CXX test/cpp_headers/keyring_module.o 00:03:55.770 CXX test/cpp_headers/likely.o 00:03:55.770 CXX test/cpp_headers/log.o 00:03:55.770 CXX test/cpp_headers/lvol.o 00:03:55.770 CXX test/cpp_headers/md5.o 00:03:55.770 CXX test/cpp_headers/memory.o 00:03:55.770 CXX test/cpp_headers/mmio.o 00:03:55.770 CXX test/cpp_headers/nbd.o 00:03:55.770 CXX test/cpp_headers/net.o 00:03:55.770 CXX test/cpp_headers/notify.o 00:03:56.030 CXX test/cpp_headers/nvme.o 00:03:56.030 CXX test/cpp_headers/nvme_intel.o 00:03:56.030 CXX test/cpp_headers/nvme_ocssd.o 00:03:56.030 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:56.030 CXX test/cpp_headers/nvme_spec.o 00:03:56.030 CXX test/cpp_headers/nvme_zns.o 00:03:56.030 CXX test/cpp_headers/nvmf_cmd.o 00:03:56.030 CC examples/nvmf/nvmf/nvmf.o 00:03:56.030 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:56.030 CXX test/cpp_headers/nvmf.o 00:03:56.030 CXX test/cpp_headers/nvmf_spec.o 00:03:56.030 CXX test/cpp_headers/nvmf_transport.o 00:03:56.290 LINK cuse 00:03:56.290 CXX test/cpp_headers/opal.o 00:03:56.290 CXX test/cpp_headers/opal_spec.o 00:03:56.290 CXX test/cpp_headers/pci_ids.o 00:03:56.290 CXX test/cpp_headers/pipe.o 00:03:56.290 CXX 
test/cpp_headers/queue.o 00:03:56.290 CXX test/cpp_headers/reduce.o 00:03:56.290 CXX test/cpp_headers/rpc.o 00:03:56.290 CXX test/cpp_headers/scheduler.o 00:03:56.290 LINK nvmf 00:03:56.290 CXX test/cpp_headers/scsi.o 00:03:56.551 CXX test/cpp_headers/scsi_spec.o 00:03:56.551 CXX test/cpp_headers/sock.o 00:03:56.551 CXX test/cpp_headers/stdinc.o 00:03:56.551 CXX test/cpp_headers/string.o 00:03:56.551 CXX test/cpp_headers/thread.o 00:03:56.551 CXX test/cpp_headers/trace.o 00:03:56.551 CXX test/cpp_headers/trace_parser.o 00:03:56.551 CXX test/cpp_headers/tree.o 00:03:56.551 CXX test/cpp_headers/ublk.o 00:03:56.551 CXX test/cpp_headers/util.o 00:03:56.551 CXX test/cpp_headers/uuid.o 00:03:56.551 CXX test/cpp_headers/version.o 00:03:56.551 CXX test/cpp_headers/vfio_user_pci.o 00:03:56.551 CXX test/cpp_headers/vfio_user_spec.o 00:03:56.551 CXX test/cpp_headers/vhost.o 00:03:56.551 CXX test/cpp_headers/vmd.o 00:03:56.551 CXX test/cpp_headers/xor.o 00:03:56.810 CXX test/cpp_headers/zipf.o 00:03:58.718 LINK esnap 00:03:58.977 00:03:58.977 real 1m28.885s 00:03:58.977 user 7m56.101s 00:03:58.977 sys 1m44.158s 00:03:58.977 13:20:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:58.977 13:20:28 make -- common/autotest_common.sh@10 -- $ set +x 00:03:58.977 ************************************ 00:03:58.977 END TEST make 00:03:58.977 ************************************ 00:03:58.977 13:20:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:58.977 13:20:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:58.977 13:20:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:58.977 13:20:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:58.977 13:20:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:58.977 13:20:28 -- pm/common@44 -- $ pid=5460 00:03:58.977 13:20:28 -- pm/common@50 -- $ kill -TERM 5460 00:03:58.977 13:20:28 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:58.977 13:20:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:58.977 13:20:28 -- pm/common@44 -- $ pid=5462 00:03:58.977 13:20:28 -- pm/common@50 -- $ kill -TERM 5462 00:03:58.977 13:20:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:58.977 13:20:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:58.977 13:20:28 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:58.977 13:20:28 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:58.977 13:20:28 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.235 13:20:29 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.235 13:20:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.235 13:20:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.235 13:20:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.235 13:20:29 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.235 13:20:29 -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.235 13:20:29 -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.235 13:20:29 -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.235 13:20:29 -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.235 13:20:29 -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.235 13:20:29 -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.235 13:20:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.235 13:20:29 -- scripts/common.sh@344 -- # case "$op" in 00:03:59.235 13:20:29 -- scripts/common.sh@345 -- # : 1 00:03:59.235 13:20:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.235 13:20:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.235 13:20:29 -- scripts/common.sh@365 -- # decimal 1 00:03:59.235 13:20:29 -- scripts/common.sh@353 -- # local d=1 00:03:59.235 13:20:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.235 13:20:29 -- scripts/common.sh@355 -- # echo 1 00:03:59.235 13:20:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.235 13:20:29 -- scripts/common.sh@366 -- # decimal 2 00:03:59.235 13:20:29 -- scripts/common.sh@353 -- # local d=2 00:03:59.235 13:20:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.235 13:20:29 -- scripts/common.sh@355 -- # echo 2 00:03:59.236 13:20:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.236 13:20:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.236 13:20:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.236 13:20:29 -- scripts/common.sh@368 -- # return 0 00:03:59.236 13:20:29 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.236 13:20:29 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.236 --rc genhtml_branch_coverage=1 00:03:59.236 --rc genhtml_function_coverage=1 00:03:59.236 --rc genhtml_legend=1 00:03:59.236 --rc geninfo_all_blocks=1 00:03:59.236 --rc geninfo_unexecuted_blocks=1 00:03:59.236 00:03:59.236 ' 00:03:59.236 13:20:29 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.236 --rc genhtml_branch_coverage=1 00:03:59.236 --rc genhtml_function_coverage=1 00:03:59.236 --rc genhtml_legend=1 00:03:59.236 --rc geninfo_all_blocks=1 00:03:59.236 --rc geninfo_unexecuted_blocks=1 00:03:59.236 00:03:59.236 ' 00:03:59.236 13:20:29 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.236 --rc genhtml_branch_coverage=1 00:03:59.236 --rc 
genhtml_function_coverage=1 00:03:59.236 --rc genhtml_legend=1 00:03:59.236 --rc geninfo_all_blocks=1 00:03:59.236 --rc geninfo_unexecuted_blocks=1 00:03:59.236 00:03:59.236 ' 00:03:59.236 13:20:29 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.236 --rc genhtml_branch_coverage=1 00:03:59.236 --rc genhtml_function_coverage=1 00:03:59.236 --rc genhtml_legend=1 00:03:59.236 --rc geninfo_all_blocks=1 00:03:59.236 --rc geninfo_unexecuted_blocks=1 00:03:59.236 00:03:59.236 ' 00:03:59.236 13:20:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:59.236 13:20:29 -- nvmf/common.sh@7 -- # uname -s 00:03:59.236 13:20:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.236 13:20:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.236 13:20:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.236 13:20:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.236 13:20:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.236 13:20:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.236 13:20:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.236 13:20:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.236 13:20:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.236 13:20:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.236 13:20:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:60483ee9-3997-4c54-a57b-28075c2968f2 00:03:59.236 13:20:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=60483ee9-3997-4c54-a57b-28075c2968f2 00:03:59.236 13:20:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.236 13:20:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.236 13:20:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.236 13:20:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:59.236 13:20:29 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:59.236 13:20:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:59.236 13:20:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.236 13:20:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.236 13:20:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.236 13:20:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.236 13:20:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.236 13:20:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.236 13:20:29 -- paths/export.sh@5 -- # export PATH 00:03:59.236 13:20:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.236 13:20:29 -- nvmf/common.sh@51 -- # : 0 00:03:59.236 13:20:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:59.236 13:20:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:59.236 13:20:29 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:59.236 13:20:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.236 13:20:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.236 13:20:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:59.236 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:59.236 13:20:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:59.236 13:20:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:59.236 13:20:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:59.236 13:20:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.236 13:20:29 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.236 13:20:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.236 13:20:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:59.236 13:20:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.236 13:20:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.236 13:20:29 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.236 13:20:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.236 13:20:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.236 13:20:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:59.236 13:20:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:59.236 13:20:29 -- spdk/autotest.sh@48 -- # udevadm_pid=54490 00:03:59.236 13:20:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.236 13:20:29 -- pm/common@17 -- # local monitor 00:03:59.236 13:20:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.236 13:20:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.236 13:20:29 -- pm/common@21 -- # date +%s 00:03:59.236 13:20:29 -- pm/common@25 -- # sleep 1 00:03:59.236 13:20:29 -- 
pm/common@21 -- # date +%s 00:03:59.236 13:20:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731936029 00:03:59.236 13:20:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731936029 00:03:59.236 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731936029_collect-cpu-load.pm.log 00:03:59.236 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731936029_collect-vmstat.pm.log 00:04:00.174 13:20:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:00.174 13:20:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:00.174 13:20:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.174 13:20:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.174 13:20:30 -- spdk/autotest.sh@59 -- # create_test_list 00:04:00.174 13:20:30 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:00.174 13:20:30 -- common/autotest_common.sh@10 -- # set +x 00:04:00.433 13:20:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:00.433 13:20:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:00.433 13:20:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:00.433 13:20:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:00.433 13:20:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:00.433 13:20:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.433 13:20:30 -- common/autotest_common.sh@1457 -- # uname 00:04:00.433 13:20:30 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:00.433 13:20:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.434 13:20:30 -- common/autotest_common.sh@1477 -- 
# uname 00:04:00.434 13:20:30 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:00.434 13:20:30 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:00.434 13:20:30 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:00.434 lcov: LCOV version 1.15 00:04:00.434 13:20:30 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:15.369 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:15.369 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:30.255 13:20:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:30.255 13:20:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.255 13:20:59 -- common/autotest_common.sh@10 -- # set +x 00:04:30.255 13:20:59 -- spdk/autotest.sh@78 -- # rm -f 00:04:30.255 13:20:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.515 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:30.775 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:30.775 13:21:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:30.775 13:21:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:30.775 13:21:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:30.775 13:21:00 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:30.775 
13:21:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.775 13:21:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:30.775 13:21:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:30.775 13:21:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.775 13:21:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.775 13:21:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.775 13:21:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:30.775 13:21:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:30.775 13:21:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:30.775 13:21:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.775 13:21:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.775 13:21:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:30.775 13:21:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:30.775 13:21:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:30.775 13:21:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.775 13:21:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.775 13:21:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:30.775 13:21:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:30.775 13:21:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:30.775 13:21:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.775 13:21:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:30.775 13:21:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:30.775 13:21:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:30.775 13:21:00 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:30.775 13:21:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:30.775 13:21:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:30.775 No valid GPT data, bailing 00:04:30.775 13:21:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:30.775 13:21:00 -- scripts/common.sh@394 -- # pt= 00:04:30.775 13:21:00 -- scripts/common.sh@395 -- # return 1 00:04:30.775 13:21:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:30.775 1+0 records in 00:04:30.775 1+0 records out 00:04:30.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0061921 s, 169 MB/s 00:04:30.775 13:21:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:30.775 13:21:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:30.775 13:21:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:30.775 13:21:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:30.775 13:21:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:30.775 No valid GPT data, bailing 00:04:30.775 13:21:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:30.775 13:21:00 -- scripts/common.sh@394 -- # pt= 00:04:30.775 13:21:00 -- scripts/common.sh@395 -- # return 1 00:04:30.775 13:21:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:30.775 1+0 records in 00:04:30.775 1+0 records out 00:04:30.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00546948 s, 192 MB/s 00:04:30.775 13:21:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:30.775 13:21:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:30.775 13:21:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:30.775 13:21:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:30.775 13:21:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:31.035 No valid GPT data, bailing 00:04:31.035 13:21:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:31.035 13:21:00 -- scripts/common.sh@394 -- # pt= 00:04:31.035 13:21:00 -- scripts/common.sh@395 -- # return 1 00:04:31.035 13:21:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:31.035 1+0 records in 00:04:31.035 1+0 records out 00:04:31.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666694 s, 157 MB/s 00:04:31.036 13:21:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:31.036 13:21:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:31.036 13:21:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:31.036 13:21:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:31.036 13:21:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:31.036 No valid GPT data, bailing 00:04:31.036 13:21:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:31.036 13:21:00 -- scripts/common.sh@394 -- # pt= 00:04:31.036 13:21:00 -- scripts/common.sh@395 -- # return 1 00:04:31.036 13:21:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:31.036 1+0 records in 00:04:31.036 1+0 records out 00:04:31.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595886 s, 176 MB/s 00:04:31.036 13:21:00 -- spdk/autotest.sh@105 -- # sync 00:04:31.036 13:21:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:31.036 13:21:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:31.036 13:21:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:34.380 13:21:03 -- spdk/autotest.sh@111 -- # uname -s 00:04:34.381 13:21:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:34.381 13:21:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:34.381 13:21:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:34.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.640 Hugepages 00:04:34.640 node hugesize free / total 00:04:34.640 node0 1048576kB 0 / 0 00:04:34.640 node0 2048kB 0 / 0 00:04:34.640 00:04:34.640 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:34.640 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:34.899 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:34.899 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:34.899 13:21:04 -- spdk/autotest.sh@117 -- # uname -s 00:04:34.899 13:21:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:34.899 13:21:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:34.899 13:21:04 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.836 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.836 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:36.095 13:21:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:37.035 13:21:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:37.035 13:21:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:37.035 13:21:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:37.035 13:21:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:37.035 13:21:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:37.035 13:21:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:37.035 13:21:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:37.035 13:21:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:37.035 13:21:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:37.035 13:21:07 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:37.035 13:21:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:37.035 13:21:07 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.605 Waiting for block devices as requested 00:04:37.605 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.866 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:37.866 13:21:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:37.866 13:21:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:37.866 13:21:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:37.866 13:21:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:37.866 13:21:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:37.866 13:21:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:37.866 13:21:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:37.866 13:21:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:37.866 13:21:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:37.866 13:21:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:37.866 13:21:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:37.866 13:21:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:37.866 13:21:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:37.866 13:21:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:37.866 13:21:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:37.866 13:21:07 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:37.866 13:21:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:37.866 13:21:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:37.866 13:21:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.866 13:21:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:37.866 13:21:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:37.866 13:21:07 -- common/autotest_common.sh@1543 -- # continue 00:04:37.866 13:21:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:37.866 13:21:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:37.866 13:21:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:37.866 13:21:07 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:37.866 13:21:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:37.866 13:21:07 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:37.866 13:21:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:37.866 13:21:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:37.866 13:21:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:37.866 13:21:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:37.866 13:21:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:37.866 13:21:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:37.866 13:21:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:37.866 13:21:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:37.866 13:21:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:37.866 13:21:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:37.866 13:21:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:37.866 13:21:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:37.866 13:21:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.866 13:21:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:37.866 13:21:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:37.866 13:21:07 -- common/autotest_common.sh@1543 -- # continue 00:04:37.866 13:21:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:37.866 13:21:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.866 13:21:07 -- common/autotest_common.sh@10 -- # set +x 00:04:37.866 13:21:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:37.866 13:21:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.866 13:21:07 -- common/autotest_common.sh@10 -- # set +x 00:04:37.866 13:21:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.806 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.065 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.065 13:21:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:39.065 13:21:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.065 13:21:08 -- common/autotest_common.sh@10 -- # set +x 00:04:39.065 13:21:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:39.065 13:21:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:39.066 13:21:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:39.066 13:21:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:39.066 13:21:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:39.066 13:21:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:39.066 13:21:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:39.066 13:21:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:39.066 
13:21:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:39.066 13:21:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:39.066 13:21:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.066 13:21:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:39.066 13:21:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:39.066 13:21:09 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:39.066 13:21:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:39.066 13:21:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:39.066 13:21:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:39.066 13:21:09 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:39.066 13:21:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:39.066 13:21:09 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:39.066 13:21:09 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:39.066 13:21:09 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:39.066 13:21:09 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:39.066 13:21:09 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:39.066 13:21:09 -- common/autotest_common.sh@1572 -- # return 0 00:04:39.066 13:21:09 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:39.066 13:21:09 -- common/autotest_common.sh@1580 -- # return 0 00:04:39.066 13:21:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:39.066 13:21:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:39.066 13:21:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:39.066 13:21:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:39.066 13:21:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:39.066 13:21:09 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.066 13:21:09 -- common/autotest_common.sh@10 -- # set +x 00:04:39.066 13:21:09 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:39.066 13:21:09 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:39.066 13:21:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.066 13:21:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.066 13:21:09 -- common/autotest_common.sh@10 -- # set +x 00:04:39.335 ************************************ 00:04:39.335 START TEST env 00:04:39.335 ************************************ 00:04:39.335 13:21:09 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:39.335 * Looking for test storage... 00:04:39.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:39.335 13:21:09 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.335 13:21:09 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.335 13:21:09 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.335 13:21:09 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.335 13:21:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.335 13:21:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.335 13:21:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.335 13:21:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.335 13:21:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.335 13:21:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.335 13:21:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.335 13:21:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.335 13:21:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.335 13:21:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.335 13:21:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.335 13:21:09 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:39.335 13:21:09 env -- scripts/common.sh@345 -- # : 1 00:04:39.335 13:21:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.335 13:21:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.335 13:21:09 env -- scripts/common.sh@365 -- # decimal 1 00:04:39.335 13:21:09 env -- scripts/common.sh@353 -- # local d=1 00:04:39.335 13:21:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.335 13:21:09 env -- scripts/common.sh@355 -- # echo 1 00:04:39.335 13:21:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.335 13:21:09 env -- scripts/common.sh@366 -- # decimal 2 00:04:39.335 13:21:09 env -- scripts/common.sh@353 -- # local d=2 00:04:39.335 13:21:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.335 13:21:09 env -- scripts/common.sh@355 -- # echo 2 00:04:39.335 13:21:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.335 13:21:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.335 13:21:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.335 13:21:09 env -- scripts/common.sh@368 -- # return 0 00:04:39.335 13:21:09 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.336 13:21:09 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.336 --rc genhtml_branch_coverage=1 00:04:39.336 --rc genhtml_function_coverage=1 00:04:39.336 --rc genhtml_legend=1 00:04:39.336 --rc geninfo_all_blocks=1 00:04:39.336 --rc geninfo_unexecuted_blocks=1 00:04:39.336 00:04:39.336 ' 00:04:39.336 13:21:09 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.336 --rc genhtml_branch_coverage=1 00:04:39.336 --rc genhtml_function_coverage=1 00:04:39.336 --rc genhtml_legend=1 00:04:39.336 --rc 
geninfo_all_blocks=1 00:04:39.336 --rc geninfo_unexecuted_blocks=1 00:04:39.336 00:04:39.336 ' 00:04:39.336 13:21:09 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.336 --rc genhtml_branch_coverage=1 00:04:39.336 --rc genhtml_function_coverage=1 00:04:39.336 --rc genhtml_legend=1 00:04:39.336 --rc geninfo_all_blocks=1 00:04:39.336 --rc geninfo_unexecuted_blocks=1 00:04:39.336 00:04:39.336 ' 00:04:39.336 13:21:09 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.336 --rc genhtml_branch_coverage=1 00:04:39.336 --rc genhtml_function_coverage=1 00:04:39.336 --rc genhtml_legend=1 00:04:39.336 --rc geninfo_all_blocks=1 00:04:39.336 --rc geninfo_unexecuted_blocks=1 00:04:39.336 00:04:39.336 ' 00:04:39.336 13:21:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:39.336 13:21:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.337 13:21:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.337 13:21:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.337 ************************************ 00:04:39.337 START TEST env_memory 00:04:39.337 ************************************ 00:04:39.337 13:21:09 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:39.337 00:04:39.337 00:04:39.337 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.337 http://cunit.sourceforge.net/ 00:04:39.337 00:04:39.337 00:04:39.337 Suite: memory 00:04:39.600 Test: alloc and free memory map ...[2024-11-18 13:21:09.423388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:39.600 passed 00:04:39.600 Test: mem map translation ...[2024-11-18 13:21:09.466729] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:39.600 [2024-11-18 13:21:09.466769] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:39.600 [2024-11-18 13:21:09.466846] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:39.600 [2024-11-18 13:21:09.466866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:39.600 passed 00:04:39.600 Test: mem map registration ...[2024-11-18 13:21:09.532035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:39.600 [2024-11-18 13:21:09.532081] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:39.600 passed 00:04:39.600 Test: mem map adjacent registrations ...passed 00:04:39.600 00:04:39.600 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.600 suites 1 1 n/a 0 0 00:04:39.600 tests 4 4 4 0 0 00:04:39.600 asserts 152 152 152 0 n/a 00:04:39.600 00:04:39.600 Elapsed time = 0.237 seconds 00:04:39.600 00:04:39.600 real 0m0.289s 00:04:39.600 user 0m0.245s 00:04:39.600 sys 0m0.032s 00:04:39.600 13:21:09 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.600 13:21:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:39.600 ************************************ 00:04:39.600 END TEST env_memory 00:04:39.600 ************************************ 00:04:39.860 13:21:09 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:39.860 
13:21:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.860 13:21:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.860 13:21:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.860 ************************************ 00:04:39.860 START TEST env_vtophys 00:04:39.860 ************************************ 00:04:39.860 13:21:09 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:39.860 EAL: lib.eal log level changed from notice to debug 00:04:39.860 EAL: Detected lcore 0 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 1 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 2 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 3 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 4 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 5 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 6 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 7 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 8 as core 0 on socket 0 00:04:39.860 EAL: Detected lcore 9 as core 0 on socket 0 00:04:39.860 EAL: Maximum logical cores by configuration: 128 00:04:39.860 EAL: Detected CPU lcores: 10 00:04:39.860 EAL: Detected NUMA nodes: 1 00:04:39.860 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:39.860 EAL: Detected shared linkage of DPDK 00:04:39.860 EAL: No shared files mode enabled, IPC will be disabled 00:04:39.860 EAL: Selected IOVA mode 'PA' 00:04:39.860 EAL: Probing VFIO support... 00:04:39.860 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:39.860 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:39.860 EAL: Ask a virtual area of 0x2e000 bytes 00:04:39.860 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:39.860 EAL: Setting up physically contiguous memory... 
00:04:39.860 EAL: Setting maximum number of open files to 524288 00:04:39.860 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:39.860 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:39.860 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.860 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:39.860 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.860 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.860 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:39.860 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:39.860 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.860 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:39.860 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.860 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.860 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:39.860 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:39.860 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.860 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:39.860 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.860 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.860 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:39.860 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:39.860 EAL: Ask a virtual area of 0x61000 bytes 00:04:39.860 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:39.860 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:39.860 EAL: Ask a virtual area of 0x400000000 bytes 00:04:39.860 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:39.860 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:39.860 EAL: Hugepages will be freed exactly as allocated. 
00:04:39.860 EAL: No shared files mode enabled, IPC is disabled 00:04:39.860 EAL: No shared files mode enabled, IPC is disabled 00:04:39.860 EAL: TSC frequency is ~2290000 KHz 00:04:39.860 EAL: Main lcore 0 is ready (tid=7f5f05917a40;cpuset=[0]) 00:04:39.860 EAL: Trying to obtain current memory policy. 00:04:39.860 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.860 EAL: Restoring previous memory policy: 0 00:04:39.860 EAL: request: mp_malloc_sync 00:04:39.860 EAL: No shared files mode enabled, IPC is disabled 00:04:39.860 EAL: Heap on socket 0 was expanded by 2MB 00:04:39.860 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:39.860 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:39.860 EAL: Mem event callback 'spdk:(nil)' registered 00:04:39.860 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:40.119 00:04:40.119 00:04:40.119 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.119 http://cunit.sourceforge.net/ 00:04:40.119 00:04:40.119 00:04:40.119 Suite: components_suite 00:04:40.378 Test: vtophys_malloc_test ...passed 00:04:40.378 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:40.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.378 EAL: Restoring previous memory policy: 4 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was expanded by 4MB 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was shrunk by 4MB 00:04:40.378 EAL: Trying to obtain current memory policy. 
00:04:40.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.378 EAL: Restoring previous memory policy: 4 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was expanded by 6MB 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was shrunk by 6MB 00:04:40.378 EAL: Trying to obtain current memory policy. 00:04:40.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.378 EAL: Restoring previous memory policy: 4 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was expanded by 10MB 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was shrunk by 10MB 00:04:40.378 EAL: Trying to obtain current memory policy. 00:04:40.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.378 EAL: Restoring previous memory policy: 4 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was expanded by 18MB 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was shrunk by 18MB 00:04:40.378 EAL: Trying to obtain current memory policy. 
00:04:40.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.378 EAL: Restoring previous memory policy: 4 00:04:40.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.378 EAL: request: mp_malloc_sync 00:04:40.378 EAL: No shared files mode enabled, IPC is disabled 00:04:40.378 EAL: Heap on socket 0 was expanded by 34MB 00:04:40.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.637 EAL: request: mp_malloc_sync 00:04:40.637 EAL: No shared files mode enabled, IPC is disabled 00:04:40.637 EAL: Heap on socket 0 was shrunk by 34MB 00:04:40.637 EAL: Trying to obtain current memory policy. 00:04:40.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.637 EAL: Restoring previous memory policy: 4 00:04:40.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.637 EAL: request: mp_malloc_sync 00:04:40.637 EAL: No shared files mode enabled, IPC is disabled 00:04:40.637 EAL: Heap on socket 0 was expanded by 66MB 00:04:40.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.637 EAL: request: mp_malloc_sync 00:04:40.637 EAL: No shared files mode enabled, IPC is disabled 00:04:40.637 EAL: Heap on socket 0 was shrunk by 66MB 00:04:40.896 EAL: Trying to obtain current memory policy. 00:04:40.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.896 EAL: Restoring previous memory policy: 4 00:04:40.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.896 EAL: request: mp_malloc_sync 00:04:40.896 EAL: No shared files mode enabled, IPC is disabled 00:04:40.896 EAL: Heap on socket 0 was expanded by 130MB 00:04:41.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.154 EAL: request: mp_malloc_sync 00:04:41.154 EAL: No shared files mode enabled, IPC is disabled 00:04:41.154 EAL: Heap on socket 0 was shrunk by 130MB 00:04:41.412 EAL: Trying to obtain current memory policy. 
00:04:41.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.412 EAL: Restoring previous memory policy: 4 00:04:41.412 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.412 EAL: request: mp_malloc_sync 00:04:41.412 EAL: No shared files mode enabled, IPC is disabled 00:04:41.412 EAL: Heap on socket 0 was expanded by 258MB 00:04:41.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.980 EAL: request: mp_malloc_sync 00:04:41.980 EAL: No shared files mode enabled, IPC is disabled 00:04:41.980 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.548 EAL: Trying to obtain current memory policy. 00:04:42.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.548 EAL: Restoring previous memory policy: 4 00:04:42.548 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.548 EAL: request: mp_malloc_sync 00:04:42.548 EAL: No shared files mode enabled, IPC is disabled 00:04:42.548 EAL: Heap on socket 0 was expanded by 514MB 00:04:43.484 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.484 EAL: request: mp_malloc_sync 00:04:43.484 EAL: No shared files mode enabled, IPC is disabled 00:04:43.484 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.419 EAL: Trying to obtain current memory policy. 
00:04:44.419 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.419 EAL: Restoring previous memory policy: 4 00:04:44.419 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.419 EAL: request: mp_malloc_sync 00:04:44.419 EAL: No shared files mode enabled, IPC is disabled 00:04:44.419 EAL: Heap on socket 0 was expanded by 1026MB 00:04:46.333 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.333 EAL: request: mp_malloc_sync 00:04:46.333 EAL: No shared files mode enabled, IPC is disabled 00:04:46.333 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:48.234 passed 00:04:48.234 00:04:48.234 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.234 suites 1 1 n/a 0 0 00:04:48.234 tests 2 2 2 0 0 00:04:48.234 asserts 5642 5642 5642 0 n/a 00:04:48.234 00:04:48.234 Elapsed time = 8.072 seconds 00:04:48.234 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.234 EAL: request: mp_malloc_sync 00:04:48.234 EAL: No shared files mode enabled, IPC is disabled 00:04:48.234 EAL: Heap on socket 0 was shrunk by 2MB 00:04:48.234 EAL: No shared files mode enabled, IPC is disabled 00:04:48.234 EAL: No shared files mode enabled, IPC is disabled 00:04:48.234 EAL: No shared files mode enabled, IPC is disabled 00:04:48.234 00:04:48.234 real 0m8.394s 00:04:48.234 user 0m7.445s 00:04:48.234 sys 0m0.798s 00:04:48.234 13:21:18 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.234 13:21:18 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:48.234 ************************************ 00:04:48.234 END TEST env_vtophys 00:04:48.234 ************************************ 00:04:48.234 13:21:18 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:48.234 13:21:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.234 13:21:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.234 13:21:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.234 
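The env_vtophys pass above grows and shrinks the heap in steps of 34, 66, 130, 258, 514 and 1026 MB — a 2^n + 2 pattern, so each allocation roughly doubles. A small illustrative sketch of that sequence (helper name is ours, not part of the test suite):

```python
# Reproduce the expand sizes seen in the EAL log above: each step is
# 2**n + 2 MB, so successive heap expansions roughly double in size.
def vtophys_expand_sizes(start_exp=5, end_exp=10):
    """Return the heap expansion sizes (in MB) for exponents start..end."""
    return [2 ** n + 2 for n in range(start_exp, end_exp + 1)]

print(vtophys_expand_sizes())  # [34, 66, 130, 258, 514, 1026]
```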
************************************ 00:04:48.234 START TEST env_pci 00:04:48.234 ************************************ 00:04:48.234 13:21:18 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:48.234 00:04:48.234 00:04:48.234 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.234 http://cunit.sourceforge.net/ 00:04:48.234 00:04:48.234 00:04:48.234 Suite: pci 00:04:48.234 Test: pci_hook ...[2024-11-18 13:21:18.207107] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56780 has claimed it 00:04:48.234 passed 00:04:48.234 00:04:48.234 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.234 suites 1 1 n/a 0 0 00:04:48.234 tests 1 1 1 0 0 00:04:48.234 asserts 25 25 25 0 n/a 00:04:48.234 00:04:48.234 Elapsed time = 0.006 seconds 00:04:48.234 EAL: Cannot find device (10000:00:01.0) 00:04:48.234 EAL: Failed to attach device on primary process 00:04:48.234 00:04:48.234 real 0m0.109s 00:04:48.234 user 0m0.048s 00:04:48.234 sys 0m0.060s 00:04:48.234 13:21:18 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.234 13:21:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:48.234 ************************************ 00:04:48.234 END TEST env_pci 00:04:48.234 ************************************ 00:04:48.495 13:21:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:48.495 13:21:18 env -- env/env.sh@15 -- # uname 00:04:48.495 13:21:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:48.495 13:21:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:48.495 13:21:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.495 13:21:18 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:48.495 13:21:18 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.495 13:21:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.495 ************************************ 00:04:48.495 START TEST env_dpdk_post_init 00:04:48.495 ************************************ 00:04:48.495 13:21:18 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.495 EAL: Detected CPU lcores: 10 00:04:48.495 EAL: Detected NUMA nodes: 1 00:04:48.495 EAL: Detected shared linkage of DPDK 00:04:48.495 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.495 EAL: Selected IOVA mode 'PA' 00:04:48.495 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.754 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:48.754 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:48.754 Starting DPDK initialization... 00:04:48.755 Starting SPDK post initialization... 00:04:48.755 SPDK NVMe probe 00:04:48.755 Attaching to 0000:00:10.0 00:04:48.755 Attaching to 0000:00:11.0 00:04:48.755 Attached to 0000:00:10.0 00:04:48.755 Attached to 0000:00:11.0 00:04:48.755 Cleaning up... 
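The probe lines above identify NVMe devices by PCI address (domain:bus:device.function, e.g. 0000:00:10.0). A quick sketch of splitting such an address into its components (the parser and its name are ours, purely illustrative):

```python
# Split a PCI address like the "0000:00:10.0" values probed above into
# (domain, bus, device, function), all hex-encoded in the string.
def parse_bdf(addr: str):
    domain, bus, devfn = addr.split(":")
    dev, func = devfn.split(".")
    return int(domain, 16), int(bus, 16), int(dev, 16), int(func, 16)

print(parse_bdf("0000:00:10.0"))  # (0, 0, 16, 0)
```

Note that the env_pci test earlier deliberately uses the out-of-range domain 10000 (as in /var/tmp/spdk_pci_lock_10000:00:01.0) to provoke the "Cannot find device" failure path.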
00:04:48.755 00:04:48.755 real 0m0.281s 00:04:48.755 user 0m0.094s 00:04:48.755 sys 0m0.087s 00:04:48.755 13:21:18 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.755 13:21:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.755 ************************************ 00:04:48.755 END TEST env_dpdk_post_init 00:04:48.755 ************************************ 00:04:48.755 13:21:18 env -- env/env.sh@26 -- # uname 00:04:48.755 13:21:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:48.755 13:21:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.755 13:21:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.755 13:21:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.755 13:21:18 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.755 ************************************ 00:04:48.755 START TEST env_mem_callbacks 00:04:48.755 ************************************ 00:04:48.755 13:21:18 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.755 EAL: Detected CPU lcores: 10 00:04:48.755 EAL: Detected NUMA nodes: 1 00:04:48.755 EAL: Detected shared linkage of DPDK 00:04:48.755 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.755 EAL: Selected IOVA mode 'PA' 00:04:49.014 00:04:49.015 00:04:49.015 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.015 http://cunit.sourceforge.net/ 00:04:49.015 00:04:49.015 00:04:49.015 Suite: memory 00:04:49.015 Test: test ... 
00:04:49.015 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.015 register 0x200000200000 2097152 00:04:49.015 malloc 3145728 00:04:49.015 register 0x200000400000 4194304 00:04:49.015 buf 0x2000004fffc0 len 3145728 PASSED 00:04:49.015 malloc 64 00:04:49.015 buf 0x2000004ffec0 len 64 PASSED 00:04:49.015 malloc 4194304 00:04:49.015 register 0x200000800000 6291456 00:04:49.015 buf 0x2000009fffc0 len 4194304 PASSED 00:04:49.015 free 0x2000004fffc0 3145728 00:04:49.015 free 0x2000004ffec0 64 00:04:49.015 unregister 0x200000400000 4194304 PASSED 00:04:49.015 free 0x2000009fffc0 4194304 00:04:49.015 unregister 0x200000800000 6291456 PASSED 00:04:49.015 malloc 8388608 00:04:49.015 register 0x200000400000 10485760 00:04:49.015 buf 0x2000005fffc0 len 8388608 PASSED 00:04:49.015 free 0x2000005fffc0 8388608 00:04:49.015 unregister 0x200000400000 10485760 PASSED 00:04:49.015 passed 00:04:49.015 00:04:49.015 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.015 suites 1 1 n/a 0 0 00:04:49.015 tests 1 1 1 0 0 00:04:49.015 asserts 15 15 15 0 n/a 00:04:49.015 00:04:49.015 Elapsed time = 0.083 seconds 00:04:49.015 00:04:49.015 real 0m0.280s 00:04:49.015 user 0m0.117s 00:04:49.015 sys 0m0.061s 00:04:49.015 13:21:18 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.015 13:21:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:49.015 ************************************ 00:04:49.015 END TEST env_mem_callbacks 00:04:49.015 ************************************ 00:04:49.015 00:04:49.015 real 0m9.910s 00:04:49.015 user 0m8.194s 00:04:49.015 sys 0m1.377s 00:04:49.015 13:21:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.015 13:21:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.015 ************************************ 00:04:49.015 END TEST env 00:04:49.015 ************************************ 00:04:49.278 13:21:19 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:49.278 13:21:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.278 13:21:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.278 13:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.278 ************************************ 00:04:49.278 START TEST rpc 00:04:49.278 ************************************ 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:49.278 * Looking for test storage... 00:04:49.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.278 13:21:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.278 13:21:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.278 13:21:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.278 13:21:19 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.278 13:21:19 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.278 13:21:19 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.278 13:21:19 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.278 13:21:19 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.278 13:21:19 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.278 13:21:19 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.278 13:21:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.278 13:21:19 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.278 13:21:19 rpc -- scripts/common.sh@345 -- # : 1 00:04:49.278 13:21:19 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.278 13:21:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.278 13:21:19 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.278 13:21:19 rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.278 13:21:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.278 13:21:19 rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.278 13:21:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.278 13:21:19 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.278 13:21:19 rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.278 13:21:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.278 13:21:19 rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.278 13:21:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.278 13:21:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.278 13:21:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.278 13:21:19 rpc -- scripts/common.sh@368 -- # return 0 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.278 --rc genhtml_branch_coverage=1 00:04:49.278 --rc genhtml_function_coverage=1 00:04:49.278 --rc genhtml_legend=1 00:04:49.278 --rc geninfo_all_blocks=1 00:04:49.278 --rc geninfo_unexecuted_blocks=1 00:04:49.278 00:04:49.278 ' 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.278 --rc genhtml_branch_coverage=1 00:04:49.278 --rc genhtml_function_coverage=1 00:04:49.278 --rc genhtml_legend=1 00:04:49.278 --rc geninfo_all_blocks=1 00:04:49.278 --rc geninfo_unexecuted_blocks=1 00:04:49.278 00:04:49.278 ' 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:49.278 --rc genhtml_branch_coverage=1 00:04:49.278 --rc genhtml_function_coverage=1 00:04:49.278 --rc genhtml_legend=1 00:04:49.278 --rc geninfo_all_blocks=1 00:04:49.278 --rc geninfo_unexecuted_blocks=1 00:04:49.278 00:04:49.278 ' 00:04:49.278 13:21:19 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.278 --rc genhtml_branch_coverage=1 00:04:49.278 --rc genhtml_function_coverage=1 00:04:49.278 --rc genhtml_legend=1 00:04:49.278 --rc geninfo_all_blocks=1 00:04:49.278 --rc geninfo_unexecuted_blocks=1 00:04:49.278 00:04:49.278 ' 00:04:49.278 13:21:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56907 00:04:49.279 13:21:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:49.279 13:21:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.279 13:21:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56907 00:04:49.279 13:21:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 56907 ']' 00:04:49.279 13:21:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.279 13:21:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.279 13:21:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.279 13:21:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.279 13:21:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.539 [2024-11-18 13:21:19.420806] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
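The xtrace above walks scripts/common.sh's version comparison (lt 1.15 2 via cmp_versions, splitting on IFS=.-:). A simplified standalone re-implementation of the same idea — the function name is ours, and the real script also handles '-' and ':' separators:

```shell
# Simplified stand-in for the lt/cmp_versions helpers traced above:
# split both versions on dots and compare field by field, treating
# missing fields in the shorter version as zero.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        ((10#$x < 10#$y)) && return 0
        ((10#$x > 10#$y)) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```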
00:04:49.539 [2024-11-18 13:21:19.420918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56907 ] 00:04:49.798 [2024-11-18 13:21:19.596097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.798 [2024-11-18 13:21:19.714405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.798 [2024-11-18 13:21:19.714457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56907' to capture a snapshot of events at runtime. 00:04:49.798 [2024-11-18 13:21:19.714468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.798 [2024-11-18 13:21:19.714478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.798 [2024-11-18 13:21:19.714485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56907 for offline analysis/debug. 
00:04:49.798 [2024-11-18 13:21:19.715732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.771 13:21:20 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.771 13:21:20 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:50.771 13:21:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.771 13:21:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.771 13:21:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.771 13:21:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.771 13:21:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.771 13:21:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.771 13:21:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.771 ************************************ 00:04:50.771 START TEST rpc_integrity 00:04:50.771 ************************************ 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.771 13:21:20 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.771 { 00:04:50.771 "name": "Malloc0", 00:04:50.771 "aliases": [ 00:04:50.771 "efd0c81e-cadb-4dca-8a31-d3bb7cec1414" 00:04:50.771 ], 00:04:50.771 "product_name": "Malloc disk", 00:04:50.771 "block_size": 512, 00:04:50.771 "num_blocks": 16384, 00:04:50.771 "uuid": "efd0c81e-cadb-4dca-8a31-d3bb7cec1414", 00:04:50.771 "assigned_rate_limits": { 00:04:50.771 "rw_ios_per_sec": 0, 00:04:50.771 "rw_mbytes_per_sec": 0, 00:04:50.771 "r_mbytes_per_sec": 0, 00:04:50.771 "w_mbytes_per_sec": 0 00:04:50.771 }, 00:04:50.771 "claimed": false, 00:04:50.771 "zoned": false, 00:04:50.771 "supported_io_types": { 00:04:50.771 "read": true, 00:04:50.771 "write": true, 00:04:50.771 "unmap": true, 00:04:50.771 "flush": true, 00:04:50.771 "reset": true, 00:04:50.771 "nvme_admin": false, 00:04:50.771 "nvme_io": false, 00:04:50.771 "nvme_io_md": false, 00:04:50.771 "write_zeroes": true, 00:04:50.771 "zcopy": true, 00:04:50.771 "get_zone_info": false, 00:04:50.771 "zone_management": false, 00:04:50.771 "zone_append": false, 00:04:50.771 "compare": false, 00:04:50.771 "compare_and_write": false, 00:04:50.771 "abort": true, 00:04:50.771 "seek_hole": false, 
00:04:50.771 "seek_data": false, 00:04:50.771 "copy": true, 00:04:50.771 "nvme_iov_md": false 00:04:50.771 }, 00:04:50.771 "memory_domains": [ 00:04:50.771 { 00:04:50.771 "dma_device_id": "system", 00:04:50.771 "dma_device_type": 1 00:04:50.771 }, 00:04:50.771 { 00:04:50.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.771 "dma_device_type": 2 00:04:50.771 } 00:04:50.771 ], 00:04:50.771 "driver_specific": {} 00:04:50.771 } 00:04:50.771 ]' 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.771 [2024-11-18 13:21:20.740889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.771 [2024-11-18 13:21:20.740948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.771 [2024-11-18 13:21:20.740974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:50.771 [2024-11-18 13:21:20.740989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.771 [2024-11-18 13:21:20.743368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.771 [2024-11-18 13:21:20.743409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.771 Passthru0 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:50.771 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.771 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.771 { 00:04:50.771 "name": "Malloc0", 00:04:50.771 "aliases": [ 00:04:50.771 "efd0c81e-cadb-4dca-8a31-d3bb7cec1414" 00:04:50.771 ], 00:04:50.771 "product_name": "Malloc disk", 00:04:50.771 "block_size": 512, 00:04:50.771 "num_blocks": 16384, 00:04:50.771 "uuid": "efd0c81e-cadb-4dca-8a31-d3bb7cec1414", 00:04:50.771 "assigned_rate_limits": { 00:04:50.771 "rw_ios_per_sec": 0, 00:04:50.771 "rw_mbytes_per_sec": 0, 00:04:50.771 "r_mbytes_per_sec": 0, 00:04:50.771 "w_mbytes_per_sec": 0 00:04:50.771 }, 00:04:50.771 "claimed": true, 00:04:50.771 "claim_type": "exclusive_write", 00:04:50.771 "zoned": false, 00:04:50.771 "supported_io_types": { 00:04:50.771 "read": true, 00:04:50.771 "write": true, 00:04:50.771 "unmap": true, 00:04:50.771 "flush": true, 00:04:50.771 "reset": true, 00:04:50.771 "nvme_admin": false, 00:04:50.771 "nvme_io": false, 00:04:50.771 "nvme_io_md": false, 00:04:50.771 "write_zeroes": true, 00:04:50.771 "zcopy": true, 00:04:50.771 "get_zone_info": false, 00:04:50.771 "zone_management": false, 00:04:50.771 "zone_append": false, 00:04:50.771 "compare": false, 00:04:50.771 "compare_and_write": false, 00:04:50.771 "abort": true, 00:04:50.771 "seek_hole": false, 00:04:50.771 "seek_data": false, 00:04:50.771 "copy": true, 00:04:50.771 "nvme_iov_md": false 00:04:50.771 }, 00:04:50.771 "memory_domains": [ 00:04:50.771 { 00:04:50.771 "dma_device_id": "system", 00:04:50.771 "dma_device_type": 1 00:04:50.771 }, 00:04:50.771 { 00:04:50.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.771 "dma_device_type": 2 00:04:50.771 } 00:04:50.771 ], 00:04:50.771 "driver_specific": {} 00:04:50.771 }, 00:04:50.771 { 00:04:50.771 "name": "Passthru0", 00:04:50.771 "aliases": [ 00:04:50.771 "bcf1ed36-54fb-58da-9ae0-7929f1345f08" 00:04:50.771 ], 00:04:50.771 "product_name": "passthru", 00:04:50.771 
"block_size": 512, 00:04:50.771 "num_blocks": 16384, 00:04:50.771 "uuid": "bcf1ed36-54fb-58da-9ae0-7929f1345f08", 00:04:50.771 "assigned_rate_limits": { 00:04:50.771 "rw_ios_per_sec": 0, 00:04:50.771 "rw_mbytes_per_sec": 0, 00:04:50.771 "r_mbytes_per_sec": 0, 00:04:50.771 "w_mbytes_per_sec": 0 00:04:50.772 }, 00:04:50.772 "claimed": false, 00:04:50.772 "zoned": false, 00:04:50.772 "supported_io_types": { 00:04:50.772 "read": true, 00:04:50.772 "write": true, 00:04:50.772 "unmap": true, 00:04:50.772 "flush": true, 00:04:50.772 "reset": true, 00:04:50.772 "nvme_admin": false, 00:04:50.772 "nvme_io": false, 00:04:50.772 "nvme_io_md": false, 00:04:50.772 "write_zeroes": true, 00:04:50.772 "zcopy": true, 00:04:50.772 "get_zone_info": false, 00:04:50.772 "zone_management": false, 00:04:50.772 "zone_append": false, 00:04:50.772 "compare": false, 00:04:50.772 "compare_and_write": false, 00:04:50.772 "abort": true, 00:04:50.772 "seek_hole": false, 00:04:50.772 "seek_data": false, 00:04:50.772 "copy": true, 00:04:50.772 "nvme_iov_md": false 00:04:50.772 }, 00:04:50.772 "memory_domains": [ 00:04:50.772 { 00:04:50.772 "dma_device_id": "system", 00:04:50.772 "dma_device_type": 1 00:04:50.772 }, 00:04:50.772 { 00:04:50.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.772 "dma_device_type": 2 00:04:50.772 } 00:04:50.772 ], 00:04:50.772 "driver_specific": { 00:04:50.772 "passthru": { 00:04:50.772 "name": "Passthru0", 00:04:50.772 "base_bdev_name": "Malloc0" 00:04:50.772 } 00:04:50.772 } 00:04:50.772 } 00:04:50.772 ]' 00:04:50.772 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.772 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.772 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.031 13:21:20 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.031 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.031 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.031 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.031 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.031 13:21:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.031 00:04:51.031 real 0m0.345s 00:04:51.031 user 0m0.191s 00:04:51.031 sys 0m0.049s 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.031 13:21:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.031 ************************************ 00:04:51.031 END TEST rpc_integrity 00:04:51.031 ************************************ 00:04:51.031 13:21:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:51.031 13:21:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.031 13:21:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.031 13:21:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.031 ************************************ 00:04:51.031 START TEST rpc_plugins 00:04:51.031 ************************************ 00:04:51.031 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:51.031 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:51.031 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.031 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.031 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.031 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:51.031 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:51.031 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.031 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.031 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.031 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.032 { 00:04:51.032 "name": "Malloc1", 00:04:51.032 "aliases": [ 00:04:51.032 "f31de74c-fd5f-4972-beaa-007b2c94b53b" 00:04:51.032 ], 00:04:51.032 "product_name": "Malloc disk", 00:04:51.032 "block_size": 4096, 00:04:51.032 "num_blocks": 256, 00:04:51.032 "uuid": "f31de74c-fd5f-4972-beaa-007b2c94b53b", 00:04:51.032 "assigned_rate_limits": { 00:04:51.032 "rw_ios_per_sec": 0, 00:04:51.032 "rw_mbytes_per_sec": 0, 00:04:51.032 "r_mbytes_per_sec": 0, 00:04:51.032 "w_mbytes_per_sec": 0 00:04:51.032 }, 00:04:51.032 "claimed": false, 00:04:51.032 "zoned": false, 00:04:51.032 "supported_io_types": { 00:04:51.032 "read": true, 00:04:51.032 "write": true, 00:04:51.032 "unmap": true, 00:04:51.032 "flush": true, 00:04:51.032 "reset": true, 00:04:51.032 "nvme_admin": false, 00:04:51.032 "nvme_io": false, 00:04:51.032 "nvme_io_md": false, 00:04:51.032 "write_zeroes": true, 00:04:51.032 "zcopy": true, 00:04:51.032 "get_zone_info": false, 00:04:51.032 "zone_management": false, 00:04:51.032 "zone_append": false, 00:04:51.032 "compare": false, 00:04:51.032 "compare_and_write": false, 00:04:51.032 "abort": true, 00:04:51.032 "seek_hole": false, 00:04:51.032 "seek_data": false, 00:04:51.032 "copy": 
true, 00:04:51.032 "nvme_iov_md": false 00:04:51.032 }, 00:04:51.032 "memory_domains": [ 00:04:51.032 { 00:04:51.032 "dma_device_id": "system", 00:04:51.032 "dma_device_type": 1 00:04:51.032 }, 00:04:51.032 { 00:04:51.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.032 "dma_device_type": 2 00:04:51.032 } 00:04:51.032 ], 00:04:51.032 "driver_specific": {} 00:04:51.032 } 00:04:51.032 ]' 00:04:51.032 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:51.291 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.291 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.291 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.291 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.291 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.291 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.291 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.291 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.291 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.291 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.291 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:51.291 13:21:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.291 00:04:51.291 real 0m0.162s 00:04:51.291 user 0m0.087s 00:04:51.291 sys 0m0.027s 00:04:51.291 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.291 13:21:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.291 ************************************ 00:04:51.291 END TEST rpc_plugins 00:04:51.291 ************************************ 00:04:51.291 13:21:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.291 13:21:21 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.291 13:21:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.291 13:21:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.291 ************************************ 00:04:51.291 START TEST rpc_trace_cmd_test 00:04:51.291 ************************************ 00:04:51.291 13:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:51.291 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:51.291 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.291 13:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.291 13:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.291 13:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.291 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:51.291 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56907", 00:04:51.291 "tpoint_group_mask": "0x8", 00:04:51.291 "iscsi_conn": { 00:04:51.291 "mask": "0x2", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "scsi": { 00:04:51.291 "mask": "0x4", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "bdev": { 00:04:51.291 "mask": "0x8", 00:04:51.291 "tpoint_mask": "0xffffffffffffffff" 00:04:51.291 }, 00:04:51.291 "nvmf_rdma": { 00:04:51.291 "mask": "0x10", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "nvmf_tcp": { 00:04:51.291 "mask": "0x20", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "ftl": { 00:04:51.291 "mask": "0x40", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "blobfs": { 00:04:51.291 "mask": "0x80", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "dsa": { 00:04:51.291 "mask": "0x200", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "thread": { 00:04:51.291 "mask": "0x400", 00:04:51.291 
"tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "nvme_pcie": { 00:04:51.291 "mask": "0x800", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "iaa": { 00:04:51.291 "mask": "0x1000", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "nvme_tcp": { 00:04:51.291 "mask": "0x2000", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "bdev_nvme": { 00:04:51.291 "mask": "0x4000", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "sock": { 00:04:51.291 "mask": "0x8000", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "blob": { 00:04:51.291 "mask": "0x10000", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "bdev_raid": { 00:04:51.291 "mask": "0x20000", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 }, 00:04:51.291 "scheduler": { 00:04:51.291 "mask": "0x40000", 00:04:51.291 "tpoint_mask": "0x0" 00:04:51.291 } 00:04:51.291 }' 00:04:51.292 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:51.292 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:51.292 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.292 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.551 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:51.551 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.551 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.551 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.551 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:51.551 13:21:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:51.551 00:04:51.551 real 0m0.250s 00:04:51.551 user 0m0.206s 00:04:51.551 sys 0m0.032s 00:04:51.551 13:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:51.551 13:21:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.551 ************************************ 00:04:51.551 END TEST rpc_trace_cmd_test 00:04:51.551 ************************************ 00:04:51.551 13:21:21 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:51.551 13:21:21 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:51.551 13:21:21 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:51.551 13:21:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.551 13:21:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.551 13:21:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.551 ************************************ 00:04:51.551 START TEST rpc_daemon_integrity 00:04:51.551 ************************************ 00:04:51.551 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:51.551 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.551 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.551 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.551 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.551 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.551 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.810 { 00:04:51.810 "name": "Malloc2", 00:04:51.810 "aliases": [ 00:04:51.810 "54931fe5-e930-4c2a-bcc5-2b14b04446f0" 00:04:51.810 ], 00:04:51.810 "product_name": "Malloc disk", 00:04:51.810 "block_size": 512, 00:04:51.810 "num_blocks": 16384, 00:04:51.810 "uuid": "54931fe5-e930-4c2a-bcc5-2b14b04446f0", 00:04:51.810 "assigned_rate_limits": { 00:04:51.810 "rw_ios_per_sec": 0, 00:04:51.810 "rw_mbytes_per_sec": 0, 00:04:51.810 "r_mbytes_per_sec": 0, 00:04:51.810 "w_mbytes_per_sec": 0 00:04:51.810 }, 00:04:51.810 "claimed": false, 00:04:51.810 "zoned": false, 00:04:51.810 "supported_io_types": { 00:04:51.810 "read": true, 00:04:51.810 "write": true, 00:04:51.810 "unmap": true, 00:04:51.810 "flush": true, 00:04:51.810 "reset": true, 00:04:51.810 "nvme_admin": false, 00:04:51.810 "nvme_io": false, 00:04:51.810 "nvme_io_md": false, 00:04:51.810 "write_zeroes": true, 00:04:51.810 "zcopy": true, 00:04:51.810 "get_zone_info": false, 00:04:51.810 "zone_management": false, 00:04:51.810 "zone_append": false, 00:04:51.810 "compare": false, 00:04:51.810 "compare_and_write": false, 00:04:51.810 "abort": true, 00:04:51.810 "seek_hole": false, 00:04:51.810 "seek_data": false, 00:04:51.810 "copy": true, 00:04:51.810 "nvme_iov_md": false 00:04:51.810 }, 00:04:51.810 "memory_domains": [ 00:04:51.810 { 00:04:51.810 "dma_device_id": "system", 00:04:51.810 "dma_device_type": 1 00:04:51.810 }, 00:04:51.810 { 00:04:51.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.810 "dma_device_type": 2 00:04:51.810 } 
00:04:51.810 ], 00:04:51.810 "driver_specific": {} 00:04:51.810 } 00:04:51.810 ]' 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.810 [2024-11-18 13:21:21.702526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:51.810 [2024-11-18 13:21:21.702609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.810 [2024-11-18 13:21:21.702635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:51.810 [2024-11-18 13:21:21.702649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.810 [2024-11-18 13:21:21.705325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.810 [2024-11-18 13:21:21.705366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.810 Passthru0 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.810 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.810 { 00:04:51.810 "name": "Malloc2", 00:04:51.810 "aliases": [ 00:04:51.810 "54931fe5-e930-4c2a-bcc5-2b14b04446f0" 
00:04:51.810 ], 00:04:51.810 "product_name": "Malloc disk", 00:04:51.810 "block_size": 512, 00:04:51.810 "num_blocks": 16384, 00:04:51.810 "uuid": "54931fe5-e930-4c2a-bcc5-2b14b04446f0", 00:04:51.810 "assigned_rate_limits": { 00:04:51.810 "rw_ios_per_sec": 0, 00:04:51.810 "rw_mbytes_per_sec": 0, 00:04:51.810 "r_mbytes_per_sec": 0, 00:04:51.810 "w_mbytes_per_sec": 0 00:04:51.810 }, 00:04:51.810 "claimed": true, 00:04:51.810 "claim_type": "exclusive_write", 00:04:51.810 "zoned": false, 00:04:51.810 "supported_io_types": { 00:04:51.810 "read": true, 00:04:51.810 "write": true, 00:04:51.810 "unmap": true, 00:04:51.810 "flush": true, 00:04:51.810 "reset": true, 00:04:51.810 "nvme_admin": false, 00:04:51.810 "nvme_io": false, 00:04:51.810 "nvme_io_md": false, 00:04:51.810 "write_zeroes": true, 00:04:51.810 "zcopy": true, 00:04:51.810 "get_zone_info": false, 00:04:51.810 "zone_management": false, 00:04:51.810 "zone_append": false, 00:04:51.810 "compare": false, 00:04:51.810 "compare_and_write": false, 00:04:51.810 "abort": true, 00:04:51.810 "seek_hole": false, 00:04:51.810 "seek_data": false, 00:04:51.810 "copy": true, 00:04:51.811 "nvme_iov_md": false 00:04:51.811 }, 00:04:51.811 "memory_domains": [ 00:04:51.811 { 00:04:51.811 "dma_device_id": "system", 00:04:51.811 "dma_device_type": 1 00:04:51.811 }, 00:04:51.811 { 00:04:51.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.811 "dma_device_type": 2 00:04:51.811 } 00:04:51.811 ], 00:04:51.811 "driver_specific": {} 00:04:51.811 }, 00:04:51.811 { 00:04:51.811 "name": "Passthru0", 00:04:51.811 "aliases": [ 00:04:51.811 "127e9d28-28d8-5eb1-8a10-60d2d405af82" 00:04:51.811 ], 00:04:51.811 "product_name": "passthru", 00:04:51.811 "block_size": 512, 00:04:51.811 "num_blocks": 16384, 00:04:51.811 "uuid": "127e9d28-28d8-5eb1-8a10-60d2d405af82", 00:04:51.811 "assigned_rate_limits": { 00:04:51.811 "rw_ios_per_sec": 0, 00:04:51.811 "rw_mbytes_per_sec": 0, 00:04:51.811 "r_mbytes_per_sec": 0, 00:04:51.811 "w_mbytes_per_sec": 0 
00:04:51.811 }, 00:04:51.811 "claimed": false, 00:04:51.811 "zoned": false, 00:04:51.811 "supported_io_types": { 00:04:51.811 "read": true, 00:04:51.811 "write": true, 00:04:51.811 "unmap": true, 00:04:51.811 "flush": true, 00:04:51.811 "reset": true, 00:04:51.811 "nvme_admin": false, 00:04:51.811 "nvme_io": false, 00:04:51.811 "nvme_io_md": false, 00:04:51.811 "write_zeroes": true, 00:04:51.811 "zcopy": true, 00:04:51.811 "get_zone_info": false, 00:04:51.811 "zone_management": false, 00:04:51.811 "zone_append": false, 00:04:51.811 "compare": false, 00:04:51.811 "compare_and_write": false, 00:04:51.811 "abort": true, 00:04:51.811 "seek_hole": false, 00:04:51.811 "seek_data": false, 00:04:51.811 "copy": true, 00:04:51.811 "nvme_iov_md": false 00:04:51.811 }, 00:04:51.811 "memory_domains": [ 00:04:51.811 { 00:04:51.811 "dma_device_id": "system", 00:04:51.811 "dma_device_type": 1 00:04:51.811 }, 00:04:51.811 { 00:04:51.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.811 "dma_device_type": 2 00:04:51.811 } 00:04:51.811 ], 00:04:51.811 "driver_specific": { 00:04:51.811 "passthru": { 00:04:51.811 "name": "Passthru0", 00:04:51.811 "base_bdev_name": "Malloc2" 00:04:51.811 } 00:04:51.811 } 00:04:51.811 } 00:04:51.811 ]' 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.811 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.070 13:21:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.070 00:04:52.070 real 0m0.348s 00:04:52.070 user 0m0.202s 00:04:52.070 sys 0m0.046s 00:04:52.070 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.070 13:21:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.070 ************************************ 00:04:52.070 END TEST rpc_daemon_integrity 00:04:52.070 ************************************ 00:04:52.070 13:21:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:52.070 13:21:21 rpc -- rpc/rpc.sh@84 -- # killprocess 56907 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@954 -- # '[' -z 56907 ']' 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@958 -- # kill -0 56907 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56907 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.070 
killing process with pid 56907 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56907' 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@973 -- # kill 56907 00:04:52.070 13:21:21 rpc -- common/autotest_common.sh@978 -- # wait 56907 00:04:54.600 00:04:54.600 real 0m5.313s 00:04:54.600 user 0m5.857s 00:04:54.600 sys 0m0.894s 00:04:54.600 13:21:24 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.600 13:21:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.600 ************************************ 00:04:54.600 END TEST rpc 00:04:54.600 ************************************ 00:04:54.600 13:21:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:54.600 13:21:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.600 13:21:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.600 13:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:54.600 ************************************ 00:04:54.600 START TEST skip_rpc 00:04:54.600 ************************************ 00:04:54.600 13:21:24 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:54.600 * Looking for test storage... 
00:04:54.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:54.600 13:21:24 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.600 13:21:24 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.600 13:21:24 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.860 13:21:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.860 --rc genhtml_branch_coverage=1 00:04:54.860 --rc genhtml_function_coverage=1 00:04:54.860 --rc genhtml_legend=1 00:04:54.860 --rc geninfo_all_blocks=1 00:04:54.860 --rc geninfo_unexecuted_blocks=1 00:04:54.860 00:04:54.860 ' 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.860 --rc genhtml_branch_coverage=1 00:04:54.860 --rc genhtml_function_coverage=1 00:04:54.860 --rc genhtml_legend=1 00:04:54.860 --rc geninfo_all_blocks=1 00:04:54.860 --rc geninfo_unexecuted_blocks=1 00:04:54.860 00:04:54.860 ' 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:54.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.860 --rc genhtml_branch_coverage=1 00:04:54.860 --rc genhtml_function_coverage=1 00:04:54.860 --rc genhtml_legend=1 00:04:54.860 --rc geninfo_all_blocks=1 00:04:54.860 --rc geninfo_unexecuted_blocks=1 00:04:54.860 00:04:54.860 ' 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.860 --rc genhtml_branch_coverage=1 00:04:54.860 --rc genhtml_function_coverage=1 00:04:54.860 --rc genhtml_legend=1 00:04:54.860 --rc geninfo_all_blocks=1 00:04:54.860 --rc geninfo_unexecuted_blocks=1 00:04:54.860 00:04:54.860 ' 00:04:54.860 13:21:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.860 13:21:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.860 13:21:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.860 13:21:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.860 ************************************ 00:04:54.860 START TEST skip_rpc 00:04:54.860 ************************************ 00:04:54.860 13:21:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:54.861 13:21:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57141 00:04:54.861 13:21:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:54.861 13:21:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.861 13:21:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:54.861 [2024-11-18 13:21:24.790776] Starting SPDK v25.01-pre 
git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:54.861 [2024-11-18 13:21:24.790891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57141 ] 00:04:55.149 [2024-11-18 13:21:24.966637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.149 [2024-11-18 13:21:25.084856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57141 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57141 ']' 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57141 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57141 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.429 killing process with pid 57141 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57141' 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57141 00:05:00.429 13:21:29 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57141 00:05:02.383 00:05:02.383 real 0m7.454s 00:05:02.383 user 0m7.000s 00:05:02.383 sys 0m0.375s 00:05:02.383 13:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.383 13:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.383 ************************************ 00:05:02.383 END TEST skip_rpc 00:05:02.383 ************************************ 00:05:02.383 13:21:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:02.383 13:21:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.383 13:21:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.383 13:21:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.383 
************************************ 00:05:02.383 START TEST skip_rpc_with_json 00:05:02.383 ************************************ 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57251 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57251 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57251 ']' 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.383 13:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.383 [2024-11-18 13:21:32.313976] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:02.383 [2024-11-18 13:21:32.314104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57251 ] 00:05:02.643 [2024-11-18 13:21:32.488198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.643 [2024-11-18 13:21:32.605671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.580 [2024-11-18 13:21:33.473624] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:03.580 request: 00:05:03.580 { 00:05:03.580 "trtype": "tcp", 00:05:03.580 "method": "nvmf_get_transports", 00:05:03.580 "req_id": 1 00:05:03.580 } 00:05:03.580 Got JSON-RPC error response 00:05:03.580 response: 00:05:03.580 { 00:05:03.580 "code": -19, 00:05:03.580 "message": "No such device" 00:05:03.580 } 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.580 [2024-11-18 13:21:33.485742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.580 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.840 { 00:05:03.840 "subsystems": [ 00:05:03.840 { 00:05:03.840 "subsystem": "fsdev", 00:05:03.840 "config": [ 00:05:03.840 { 00:05:03.840 "method": "fsdev_set_opts", 00:05:03.840 "params": { 00:05:03.840 "fsdev_io_pool_size": 65535, 00:05:03.840 "fsdev_io_cache_size": 256 00:05:03.840 } 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "keyring", 00:05:03.840 "config": [] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "iobuf", 00:05:03.840 "config": [ 00:05:03.840 { 00:05:03.840 "method": "iobuf_set_options", 00:05:03.840 "params": { 00:05:03.840 "small_pool_count": 8192, 00:05:03.840 "large_pool_count": 1024, 00:05:03.840 "small_bufsize": 8192, 00:05:03.840 "large_bufsize": 135168, 00:05:03.840 "enable_numa": false 00:05:03.840 } 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "sock", 00:05:03.840 "config": [ 00:05:03.840 { 00:05:03.840 "method": "sock_set_default_impl", 00:05:03.840 "params": { 00:05:03.840 "impl_name": "posix" 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "sock_impl_set_options", 00:05:03.840 "params": { 00:05:03.840 "impl_name": "ssl", 00:05:03.840 "recv_buf_size": 4096, 00:05:03.840 "send_buf_size": 4096, 00:05:03.840 "enable_recv_pipe": true, 00:05:03.840 "enable_quickack": false, 00:05:03.840 
"enable_placement_id": 0, 00:05:03.840 "enable_zerocopy_send_server": true, 00:05:03.840 "enable_zerocopy_send_client": false, 00:05:03.840 "zerocopy_threshold": 0, 00:05:03.840 "tls_version": 0, 00:05:03.840 "enable_ktls": false 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "sock_impl_set_options", 00:05:03.840 "params": { 00:05:03.840 "impl_name": "posix", 00:05:03.840 "recv_buf_size": 2097152, 00:05:03.840 "send_buf_size": 2097152, 00:05:03.840 "enable_recv_pipe": true, 00:05:03.840 "enable_quickack": false, 00:05:03.840 "enable_placement_id": 0, 00:05:03.840 "enable_zerocopy_send_server": true, 00:05:03.840 "enable_zerocopy_send_client": false, 00:05:03.840 "zerocopy_threshold": 0, 00:05:03.840 "tls_version": 0, 00:05:03.840 "enable_ktls": false 00:05:03.840 } 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "vmd", 00:05:03.840 "config": [] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "accel", 00:05:03.840 "config": [ 00:05:03.840 { 00:05:03.840 "method": "accel_set_options", 00:05:03.840 "params": { 00:05:03.840 "small_cache_size": 128, 00:05:03.840 "large_cache_size": 16, 00:05:03.840 "task_count": 2048, 00:05:03.840 "sequence_count": 2048, 00:05:03.840 "buf_count": 2048 00:05:03.840 } 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "bdev", 00:05:03.840 "config": [ 00:05:03.840 { 00:05:03.840 "method": "bdev_set_options", 00:05:03.840 "params": { 00:05:03.840 "bdev_io_pool_size": 65535, 00:05:03.840 "bdev_io_cache_size": 256, 00:05:03.840 "bdev_auto_examine": true, 00:05:03.840 "iobuf_small_cache_size": 128, 00:05:03.840 "iobuf_large_cache_size": 16 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "bdev_raid_set_options", 00:05:03.840 "params": { 00:05:03.840 "process_window_size_kb": 1024, 00:05:03.840 "process_max_bandwidth_mb_sec": 0 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "bdev_iscsi_set_options", 
00:05:03.840 "params": { 00:05:03.840 "timeout_sec": 30 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "bdev_nvme_set_options", 00:05:03.840 "params": { 00:05:03.840 "action_on_timeout": "none", 00:05:03.840 "timeout_us": 0, 00:05:03.840 "timeout_admin_us": 0, 00:05:03.840 "keep_alive_timeout_ms": 10000, 00:05:03.840 "arbitration_burst": 0, 00:05:03.840 "low_priority_weight": 0, 00:05:03.840 "medium_priority_weight": 0, 00:05:03.840 "high_priority_weight": 0, 00:05:03.840 "nvme_adminq_poll_period_us": 10000, 00:05:03.840 "nvme_ioq_poll_period_us": 0, 00:05:03.840 "io_queue_requests": 0, 00:05:03.840 "delay_cmd_submit": true, 00:05:03.840 "transport_retry_count": 4, 00:05:03.840 "bdev_retry_count": 3, 00:05:03.840 "transport_ack_timeout": 0, 00:05:03.840 "ctrlr_loss_timeout_sec": 0, 00:05:03.840 "reconnect_delay_sec": 0, 00:05:03.840 "fast_io_fail_timeout_sec": 0, 00:05:03.840 "disable_auto_failback": false, 00:05:03.840 "generate_uuids": false, 00:05:03.840 "transport_tos": 0, 00:05:03.840 "nvme_error_stat": false, 00:05:03.840 "rdma_srq_size": 0, 00:05:03.840 "io_path_stat": false, 00:05:03.840 "allow_accel_sequence": false, 00:05:03.840 "rdma_max_cq_size": 0, 00:05:03.840 "rdma_cm_event_timeout_ms": 0, 00:05:03.840 "dhchap_digests": [ 00:05:03.840 "sha256", 00:05:03.840 "sha384", 00:05:03.840 "sha512" 00:05:03.840 ], 00:05:03.840 "dhchap_dhgroups": [ 00:05:03.840 "null", 00:05:03.840 "ffdhe2048", 00:05:03.840 "ffdhe3072", 00:05:03.840 "ffdhe4096", 00:05:03.840 "ffdhe6144", 00:05:03.840 "ffdhe8192" 00:05:03.840 ] 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "bdev_nvme_set_hotplug", 00:05:03.840 "params": { 00:05:03.840 "period_us": 100000, 00:05:03.840 "enable": false 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "bdev_wait_for_examine" 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "scsi", 00:05:03.840 "config": null 00:05:03.840 }, 00:05:03.840 { 
00:05:03.840 "subsystem": "scheduler", 00:05:03.840 "config": [ 00:05:03.840 { 00:05:03.840 "method": "framework_set_scheduler", 00:05:03.840 "params": { 00:05:03.840 "name": "static" 00:05:03.840 } 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "vhost_scsi", 00:05:03.840 "config": [] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "vhost_blk", 00:05:03.840 "config": [] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "ublk", 00:05:03.840 "config": [] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "nbd", 00:05:03.840 "config": [] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "nvmf", 00:05:03.840 "config": [ 00:05:03.840 { 00:05:03.840 "method": "nvmf_set_config", 00:05:03.840 "params": { 00:05:03.840 "discovery_filter": "match_any", 00:05:03.840 "admin_cmd_passthru": { 00:05:03.840 "identify_ctrlr": false 00:05:03.840 }, 00:05:03.840 "dhchap_digests": [ 00:05:03.840 "sha256", 00:05:03.840 "sha384", 00:05:03.840 "sha512" 00:05:03.840 ], 00:05:03.840 "dhchap_dhgroups": [ 00:05:03.840 "null", 00:05:03.840 "ffdhe2048", 00:05:03.840 "ffdhe3072", 00:05:03.840 "ffdhe4096", 00:05:03.840 "ffdhe6144", 00:05:03.840 "ffdhe8192" 00:05:03.840 ] 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "nvmf_set_max_subsystems", 00:05:03.840 "params": { 00:05:03.840 "max_subsystems": 1024 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "nvmf_set_crdt", 00:05:03.840 "params": { 00:05:03.840 "crdt1": 0, 00:05:03.840 "crdt2": 0, 00:05:03.840 "crdt3": 0 00:05:03.840 } 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "method": "nvmf_create_transport", 00:05:03.840 "params": { 00:05:03.840 "trtype": "TCP", 00:05:03.840 "max_queue_depth": 128, 00:05:03.840 "max_io_qpairs_per_ctrlr": 127, 00:05:03.840 "in_capsule_data_size": 4096, 00:05:03.840 "max_io_size": 131072, 00:05:03.840 "io_unit_size": 131072, 00:05:03.840 "max_aq_depth": 128, 00:05:03.840 "num_shared_buffers": 511, 
00:05:03.840 "buf_cache_size": 4294967295, 00:05:03.840 "dif_insert_or_strip": false, 00:05:03.840 "zcopy": false, 00:05:03.840 "c2h_success": true, 00:05:03.840 "sock_priority": 0, 00:05:03.840 "abort_timeout_sec": 1, 00:05:03.840 "ack_timeout": 0, 00:05:03.840 "data_wr_pool_size": 0 00:05:03.840 } 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 }, 00:05:03.840 { 00:05:03.840 "subsystem": "iscsi", 00:05:03.840 "config": [ 00:05:03.840 { 00:05:03.840 "method": "iscsi_set_options", 00:05:03.840 "params": { 00:05:03.840 "node_base": "iqn.2016-06.io.spdk", 00:05:03.840 "max_sessions": 128, 00:05:03.840 "max_connections_per_session": 2, 00:05:03.840 "max_queue_depth": 64, 00:05:03.840 "default_time2wait": 2, 00:05:03.840 "default_time2retain": 20, 00:05:03.840 "first_burst_length": 8192, 00:05:03.840 "immediate_data": true, 00:05:03.840 "allow_duplicated_isid": false, 00:05:03.840 "error_recovery_level": 0, 00:05:03.840 "nop_timeout": 60, 00:05:03.840 "nop_in_interval": 30, 00:05:03.840 "disable_chap": false, 00:05:03.840 "require_chap": false, 00:05:03.840 "mutual_chap": false, 00:05:03.840 "chap_group": 0, 00:05:03.840 "max_large_datain_per_connection": 64, 00:05:03.840 "max_r2t_per_connection": 4, 00:05:03.840 "pdu_pool_size": 36864, 00:05:03.840 "immediate_data_pool_size": 16384, 00:05:03.840 "data_out_pool_size": 2048 00:05:03.840 } 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 } 00:05:03.840 ] 00:05:03.840 } 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57251 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57251 ']' 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57251 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57251 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.840 killing process with pid 57251 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57251' 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57251 00:05:03.840 13:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57251 00:05:06.425 13:21:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57296 00:05:06.425 13:21:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.425 13:21:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:11.740 13:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57296 00:05:11.740 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57296 ']' 00:05:11.740 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57296 00:05:11.740 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:11.740 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.740 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57296 00:05:11.741 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.741 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:05:11.741 killing process with pid 57296 00:05:11.741 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57296' 00:05:11.741 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57296 00:05:11.741 13:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57296 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:13.650 00:05:13.650 real 0m11.311s 00:05:13.650 user 0m10.781s 00:05:13.650 sys 0m0.825s 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.650 ************************************ 00:05:13.650 END TEST skip_rpc_with_json 00:05:13.650 ************************************ 00:05:13.650 13:21:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:13.650 13:21:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.650 13:21:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.650 13:21:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.650 ************************************ 00:05:13.650 START TEST skip_rpc_with_delay 00:05:13.650 ************************************ 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:13.650 13:21:43 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:13.650 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.650 [2024-11-18 13:21:43.685252] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
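The trace above shows the harness running `spdk_tgt --no-rpc-server --wait-for-rpc` through a `NOT` wrapper and then checking the resulting `es` value: the target is expected to refuse the flag combination, and the test passes only because the command failed. A hedged sketch of that expected-failure pattern (a simplified stand-in, not the actual `autotest_common.sh` helper):

```shell
#!/usr/bin/env bash
# Hedged sketch of the "NOT" pattern exercised in the trace above: run a
# command that is *expected* to fail, record its exit status as `es`, and
# succeed only when the command actually failed. Names are illustrative;
# the real helper in autotest_common.sh does more bookkeeping.
NOT() {
  local es=0
  "$@" || es=$?
  # Success for the wrapper means failure of the wrapped command.
  (( es != 0 ))
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success rejected"
```

Inverting the status this way lets `set -e`-style harnesses treat "the target correctly rejected bad arguments" as a passing step instead of aborting the run.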
00:05:13.910 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:13.910 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.910 ************************************ 00:05:13.910 END TEST skip_rpc_with_delay 00:05:13.910 ************************************ 00:05:13.910 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.910 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.910 00:05:13.910 real 0m0.155s 00:05:13.910 user 0m0.078s 00:05:13.910 sys 0m0.075s 00:05:13.910 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.910 13:21:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:13.910 13:21:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:13.910 13:21:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:13.910 13:21:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:13.910 13:21:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.910 13:21:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.910 13:21:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.910 ************************************ 00:05:13.910 START TEST exit_on_failed_rpc_init 00:05:13.910 ************************************ 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57435 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57435 00:05:13.910 13:21:43 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57435 ']' 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.910 13:21:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.911 [2024-11-18 13:21:43.893529] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:13.911 [2024-11-18 13:21:43.893645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57435 ] 00:05:14.170 [2024-11-18 13:21:44.065209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.170 [2024-11-18 13:21:44.179245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.107 13:21:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:15.107 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.107 [2024-11-18 13:21:45.137081] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:15.107 [2024-11-18 13:21:45.137287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57453 ] 00:05:15.365 [2024-11-18 13:21:45.303914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.624 [2024-11-18 13:21:45.419191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.624 [2024-11-18 13:21:45.419385] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:15.624 [2024-11-18 13:21:45.419449] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:15.624 [2024-11-18 13:21:45.419487] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57435 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57435 ']' 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57435 00:05:15.884 13:21:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57435 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57435' 00:05:15.884 killing process with pid 57435 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57435 00:05:15.884 13:21:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57435 00:05:18.419 00:05:18.419 real 0m4.336s 00:05:18.419 user 0m4.682s 00:05:18.419 sys 0m0.553s 00:05:18.419 13:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.419 ************************************ 00:05:18.419 END TEST exit_on_failed_rpc_init 00:05:18.419 ************************************ 00:05:18.419 13:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.419 13:21:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.419 00:05:18.419 real 0m23.733s 00:05:18.419 user 0m22.748s 00:05:18.419 sys 0m2.110s 00:05:18.419 13:21:48 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.419 ************************************ 00:05:18.419 END TEST skip_rpc 00:05:18.419 ************************************ 00:05:18.419 13:21:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.419 13:21:48 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:18.419 13:21:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.419 13:21:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.419 13:21:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.419 ************************************ 00:05:18.419 START TEST rpc_client 00:05:18.419 ************************************ 00:05:18.419 13:21:48 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:18.419 * Looking for test storage... 00:05:18.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:18.419 13:21:48 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.419 13:21:48 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.419 13:21:48 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.419 13:21:48 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.419 13:21:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:18.679 13:21:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.679 13:21:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.679 13:21:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.679 13:21:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:18.679 13:21:48 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.679 13:21:48 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.679 --rc genhtml_branch_coverage=1 00:05:18.679 --rc genhtml_function_coverage=1 00:05:18.679 --rc genhtml_legend=1 00:05:18.679 --rc geninfo_all_blocks=1 00:05:18.679 --rc geninfo_unexecuted_blocks=1 00:05:18.679 00:05:18.679 ' 00:05:18.679 13:21:48 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.679 --rc genhtml_branch_coverage=1 00:05:18.679 --rc genhtml_function_coverage=1 00:05:18.679 --rc 
genhtml_legend=1 00:05:18.679 --rc geninfo_all_blocks=1 00:05:18.679 --rc geninfo_unexecuted_blocks=1 00:05:18.679 00:05:18.679 ' 00:05:18.679 13:21:48 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.679 --rc genhtml_branch_coverage=1 00:05:18.679 --rc genhtml_function_coverage=1 00:05:18.679 --rc genhtml_legend=1 00:05:18.679 --rc geninfo_all_blocks=1 00:05:18.679 --rc geninfo_unexecuted_blocks=1 00:05:18.679 00:05:18.679 ' 00:05:18.679 13:21:48 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.679 --rc genhtml_branch_coverage=1 00:05:18.679 --rc genhtml_function_coverage=1 00:05:18.679 --rc genhtml_legend=1 00:05:18.679 --rc geninfo_all_blocks=1 00:05:18.679 --rc geninfo_unexecuted_blocks=1 00:05:18.679 00:05:18.679 ' 00:05:18.679 13:21:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:18.679 OK 00:05:18.679 13:21:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:18.679 00:05:18.679 real 0m0.275s 00:05:18.679 user 0m0.156s 00:05:18.679 sys 0m0.133s 00:05:18.679 13:21:48 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.679 13:21:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:18.679 ************************************ 00:05:18.679 END TEST rpc_client 00:05:18.679 ************************************ 00:05:18.679 13:21:48 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:18.679 13:21:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.679 13:21:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.679 13:21:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.679 ************************************ 00:05:18.679 START TEST json_config 
00:05:18.679 ************************************ 00:05:18.679 13:21:48 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:18.679 13:21:48 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.679 13:21:48 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.679 13:21:48 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.938 13:21:48 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.938 13:21:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.938 13:21:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.938 13:21:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.938 13:21:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.938 13:21:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.938 13:21:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.938 13:21:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.938 13:21:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.938 13:21:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.938 13:21:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.938 13:21:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.938 13:21:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:18.938 13:21:48 json_config -- scripts/common.sh@345 -- # : 1 00:05:18.938 13:21:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.938 13:21:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.938 13:21:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:18.938 13:21:48 json_config -- scripts/common.sh@353 -- # local d=1 00:05:18.938 13:21:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.938 13:21:48 json_config -- scripts/common.sh@355 -- # echo 1 00:05:18.938 13:21:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.938 13:21:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:18.938 13:21:48 json_config -- scripts/common.sh@353 -- # local d=2 00:05:18.938 13:21:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.938 13:21:48 json_config -- scripts/common.sh@355 -- # echo 2 00:05:18.938 13:21:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.938 13:21:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.938 13:21:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.938 13:21:48 json_config -- scripts/common.sh@368 -- # return 0 00:05:18.938 13:21:48 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.939 13:21:48 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.939 --rc genhtml_branch_coverage=1 00:05:18.939 --rc genhtml_function_coverage=1 00:05:18.939 --rc genhtml_legend=1 00:05:18.939 --rc geninfo_all_blocks=1 00:05:18.939 --rc geninfo_unexecuted_blocks=1 00:05:18.939 00:05:18.939 ' 00:05:18.939 13:21:48 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.939 --rc genhtml_branch_coverage=1 00:05:18.939 --rc genhtml_function_coverage=1 00:05:18.939 --rc genhtml_legend=1 00:05:18.939 --rc geninfo_all_blocks=1 00:05:18.939 --rc geninfo_unexecuted_blocks=1 00:05:18.939 00:05:18.939 ' 00:05:18.939 13:21:48 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.939 --rc genhtml_branch_coverage=1 00:05:18.939 --rc genhtml_function_coverage=1 00:05:18.939 --rc genhtml_legend=1 00:05:18.939 --rc geninfo_all_blocks=1 00:05:18.939 --rc geninfo_unexecuted_blocks=1 00:05:18.939 00:05:18.939 ' 00:05:18.939 13:21:48 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.939 --rc genhtml_branch_coverage=1 00:05:18.939 --rc genhtml_function_coverage=1 00:05:18.939 --rc genhtml_legend=1 00:05:18.939 --rc geninfo_all_blocks=1 00:05:18.939 --rc geninfo_unexecuted_blocks=1 00:05:18.939 00:05:18.939 ' 00:05:18.939 13:21:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:60483ee9-3997-4c54-a57b-28075c2968f2 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=60483ee9-3997-4c54-a57b-28075c2968f2 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.939 13:21:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.939 13:21:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.939 13:21:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.939 13:21:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.939 13:21:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.939 13:21:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.939 13:21:48 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.939 13:21:48 json_config -- paths/export.sh@5 -- # export PATH 00:05:18.939 13:21:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@51 -- # : 0 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.939 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.939 13:21:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.939 13:21:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:18.939 13:21:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:18.939 13:21:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:18.939 13:21:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:18.939 13:21:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:18.939 13:21:48 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:18.939 WARNING: No tests are enabled so not running JSON configuration tests 00:05:18.939 13:21:48 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:18.939 00:05:18.939 real 0m0.217s 00:05:18.939 user 0m0.131s 00:05:18.939 sys 0m0.091s 00:05:18.939 13:21:48 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.939 13:21:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.939 ************************************ 00:05:18.939 END TEST json_config 00:05:18.939 ************************************ 00:05:18.939 13:21:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.939 13:21:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.939 13:21:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.939 13:21:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.939 ************************************ 00:05:18.939 START TEST json_config_extra_key 00:05:18.939 ************************************ 00:05:18.939 13:21:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.939 13:21:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.939 13:21:48 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:18.939 13:21:48 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.200 13:21:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:19.200 13:21:49 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.200 13:21:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.200 --rc genhtml_branch_coverage=1 00:05:19.200 --rc genhtml_function_coverage=1 00:05:19.200 --rc genhtml_legend=1 00:05:19.200 --rc geninfo_all_blocks=1 00:05:19.200 --rc geninfo_unexecuted_blocks=1 00:05:19.200 00:05:19.200 ' 00:05:19.200 13:21:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.200 --rc genhtml_branch_coverage=1 00:05:19.200 --rc genhtml_function_coverage=1 00:05:19.200 --rc 
genhtml_legend=1 00:05:19.200 --rc geninfo_all_blocks=1 00:05:19.200 --rc geninfo_unexecuted_blocks=1 00:05:19.200 00:05:19.200 ' 00:05:19.200 13:21:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.200 --rc genhtml_branch_coverage=1 00:05:19.200 --rc genhtml_function_coverage=1 00:05:19.200 --rc genhtml_legend=1 00:05:19.200 --rc geninfo_all_blocks=1 00:05:19.200 --rc geninfo_unexecuted_blocks=1 00:05:19.200 00:05:19.200 ' 00:05:19.200 13:21:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.200 --rc genhtml_branch_coverage=1 00:05:19.200 --rc genhtml_function_coverage=1 00:05:19.200 --rc genhtml_legend=1 00:05:19.200 --rc geninfo_all_blocks=1 00:05:19.200 --rc geninfo_unexecuted_blocks=1 00:05:19.200 00:05:19.200 ' 00:05:19.200 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:60483ee9-3997-4c54-a57b-28075c2968f2 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=60483ee9-3997-4c54-a57b-28075c2968f2 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.200 13:21:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.200 13:21:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.200 13:21:49 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.200 13:21:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.200 13:21:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.200 13:21:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.200 13:21:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.201 13:21:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.201 13:21:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.201 13:21:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:19.201 13:21:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.201 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.201 13:21:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.201 13:21:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.201 13:21:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:19.201 INFO: launching applications... 
00:05:19.201 13:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57663 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.201 Waiting for target to run... 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57663 /var/tmp/spdk_tgt.sock 00:05:19.201 13:21:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.201 13:21:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57663 ']' 00:05:19.201 13:21:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.201 13:21:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:19.201 13:21:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.201 13:21:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.201 13:21:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.201 [2024-11-18 13:21:49.194334] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:19.201 [2024-11-18 13:21:49.194463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57663 ] 00:05:19.766 [2024-11-18 13:21:49.577014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.766 [2024-11-18 13:21:49.679847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.700 00:05:20.701 INFO: shutting down applications... 00:05:20.701 13:21:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.701 13:21:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:20.701 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:20.701 13:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57663 ]] 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57663 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57663 00:05:20.701 13:21:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.959 13:21:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.959 13:21:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.959 13:21:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57663 00:05:20.959 13:21:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.524 13:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.524 13:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.524 13:21:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57663 00:05:21.524 13:21:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.147 13:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.147 13:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.147 13:21:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57663 00:05:22.147 13:21:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.407 13:21:52 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:22.407 13:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.407 13:21:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57663 00:05:22.407 13:21:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.975 13:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.975 13:21:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.975 13:21:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57663 00:05:22.975 13:21:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.544 13:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.544 13:21:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.544 13:21:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57663 00:05:23.544 SPDK target shutdown done 00:05:23.544 Success 00:05:23.544 13:21:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.544 13:21:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:23.544 13:21:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.544 13:21:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.544 13:21:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:23.544 00:05:23.544 real 0m4.594s 00:05:23.544 user 0m4.160s 00:05:23.544 sys 0m0.553s 00:05:23.544 13:21:53 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.544 13:21:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.544 ************************************ 00:05:23.544 END TEST json_config_extra_key 00:05:23.544 ************************************ 00:05:23.544 13:21:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.544 13:21:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.544 13:21:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.544 13:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:23.544 ************************************ 00:05:23.544 START TEST alias_rpc 00:05:23.544 ************************************ 00:05:23.544 13:21:53 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.803 * Looking for test storage... 00:05:23.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:23.803 13:21:53 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.803 13:21:53 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.803 13:21:53 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.803 13:21:53 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:23.803 13:21:53 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.803 13:21:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:23.803 13:21:53 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.803 13:21:53 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 13:21:53 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc 
genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 13:21:53 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.804 --rc genhtml_branch_coverage=1 00:05:23.804 --rc genhtml_function_coverage=1 00:05:23.804 --rc genhtml_legend=1 00:05:23.804 --rc geninfo_all_blocks=1 00:05:23.804 --rc geninfo_unexecuted_blocks=1 00:05:23.804 00:05:23.804 ' 00:05:23.804 13:21:53 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.804 --rc genhtml_branch_coverage=1 00:05:23.804 --rc genhtml_function_coverage=1 00:05:23.804 --rc genhtml_legend=1 00:05:23.804 --rc geninfo_all_blocks=1 00:05:23.804 --rc geninfo_unexecuted_blocks=1 00:05:23.804 00:05:23.804 ' 00:05:23.804 13:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.804 13:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57775 00:05:23.804 13:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.804 13:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57775 00:05:23.804 13:21:53 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57775 ']' 00:05:23.804 13:21:53 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.804 13:21:53 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.804 13:21:53 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:23.804 13:21:53 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.804 13:21:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.804 [2024-11-18 13:21:53.843743] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:23.804 [2024-11-18 13:21:53.843872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57775 ] 00:05:24.063 [2024-11-18 13:21:54.016898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.321 [2024-11-18 13:21:54.132317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.260 13:21:54 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.260 13:21:54 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.260 13:21:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:25.260 13:21:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57775 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57775 ']' 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57775 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57775 00:05:25.260 killing process with pid 57775 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57775' 00:05:25.260 13:21:55 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57775 00:05:25.260 13:21:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 57775 00:05:27.791 ************************************ 00:05:27.791 END TEST alias_rpc 00:05:27.791 ************************************ 00:05:27.791 00:05:27.791 real 0m4.115s 00:05:27.791 user 0m4.097s 00:05:27.791 sys 0m0.572s 00:05:27.791 13:21:57 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.791 13:21:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.791 13:21:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:27.791 13:21:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:27.791 13:21:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.791 13:21:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.791 13:21:57 -- common/autotest_common.sh@10 -- # set +x 00:05:27.791 ************************************ 00:05:27.791 START TEST spdkcli_tcp 00:05:27.791 ************************************ 00:05:27.791 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:27.791 * Looking for test storage... 
00:05:27.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:27.791 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.791 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.791 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.051 13:21:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.051 --rc genhtml_branch_coverage=1 00:05:28.051 --rc genhtml_function_coverage=1 00:05:28.051 --rc genhtml_legend=1 00:05:28.051 --rc geninfo_all_blocks=1 00:05:28.051 --rc geninfo_unexecuted_blocks=1 00:05:28.051 00:05:28.051 ' 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.051 --rc genhtml_branch_coverage=1 00:05:28.051 --rc genhtml_function_coverage=1 00:05:28.051 --rc genhtml_legend=1 00:05:28.051 --rc geninfo_all_blocks=1 00:05:28.051 --rc geninfo_unexecuted_blocks=1 00:05:28.051 00:05:28.051 ' 00:05:28.051 13:21:57 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.051 --rc genhtml_branch_coverage=1 00:05:28.051 --rc genhtml_function_coverage=1 00:05:28.051 --rc genhtml_legend=1 00:05:28.051 --rc geninfo_all_blocks=1 00:05:28.051 --rc geninfo_unexecuted_blocks=1 00:05:28.051 00:05:28.051 ' 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.051 --rc genhtml_branch_coverage=1 00:05:28.051 --rc genhtml_function_coverage=1 00:05:28.051 --rc genhtml_legend=1 00:05:28.051 --rc geninfo_all_blocks=1 00:05:28.051 --rc geninfo_unexecuted_blocks=1 00:05:28.051 00:05:28.051 ' 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57882 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:28.051 13:21:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57882 00:05:28.051 13:21:57 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57882 ']' 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.051 13:21:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.051 [2024-11-18 13:21:58.037368] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:28.051 [2024-11-18 13:21:58.037993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57882 ] 00:05:28.310 [2024-11-18 13:21:58.202834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.310 [2024-11-18 13:21:58.323238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.310 [2024-11-18 13:21:58.323275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.247 13:21:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.247 13:21:59 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:29.247 13:21:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57899 00:05:29.247 13:21:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:29.247 13:21:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:29.507 [ 00:05:29.507 "bdev_malloc_delete", 
00:05:29.507 "bdev_malloc_create", 00:05:29.507 "bdev_null_resize", 00:05:29.507 "bdev_null_delete", 00:05:29.507 "bdev_null_create", 00:05:29.507 "bdev_nvme_cuse_unregister", 00:05:29.507 "bdev_nvme_cuse_register", 00:05:29.507 "bdev_opal_new_user", 00:05:29.507 "bdev_opal_set_lock_state", 00:05:29.507 "bdev_opal_delete", 00:05:29.507 "bdev_opal_get_info", 00:05:29.507 "bdev_opal_create", 00:05:29.507 "bdev_nvme_opal_revert", 00:05:29.507 "bdev_nvme_opal_init", 00:05:29.507 "bdev_nvme_send_cmd", 00:05:29.507 "bdev_nvme_set_keys", 00:05:29.507 "bdev_nvme_get_path_iostat", 00:05:29.507 "bdev_nvme_get_mdns_discovery_info", 00:05:29.507 "bdev_nvme_stop_mdns_discovery", 00:05:29.507 "bdev_nvme_start_mdns_discovery", 00:05:29.507 "bdev_nvme_set_multipath_policy", 00:05:29.507 "bdev_nvme_set_preferred_path", 00:05:29.507 "bdev_nvme_get_io_paths", 00:05:29.507 "bdev_nvme_remove_error_injection", 00:05:29.507 "bdev_nvme_add_error_injection", 00:05:29.507 "bdev_nvme_get_discovery_info", 00:05:29.507 "bdev_nvme_stop_discovery", 00:05:29.507 "bdev_nvme_start_discovery", 00:05:29.507 "bdev_nvme_get_controller_health_info", 00:05:29.507 "bdev_nvme_disable_controller", 00:05:29.507 "bdev_nvme_enable_controller", 00:05:29.507 "bdev_nvme_reset_controller", 00:05:29.507 "bdev_nvme_get_transport_statistics", 00:05:29.507 "bdev_nvme_apply_firmware", 00:05:29.507 "bdev_nvme_detach_controller", 00:05:29.507 "bdev_nvme_get_controllers", 00:05:29.507 "bdev_nvme_attach_controller", 00:05:29.507 "bdev_nvme_set_hotplug", 00:05:29.507 "bdev_nvme_set_options", 00:05:29.507 "bdev_passthru_delete", 00:05:29.507 "bdev_passthru_create", 00:05:29.507 "bdev_lvol_set_parent_bdev", 00:05:29.507 "bdev_lvol_set_parent", 00:05:29.507 "bdev_lvol_check_shallow_copy", 00:05:29.507 "bdev_lvol_start_shallow_copy", 00:05:29.508 "bdev_lvol_grow_lvstore", 00:05:29.508 "bdev_lvol_get_lvols", 00:05:29.508 "bdev_lvol_get_lvstores", 00:05:29.508 "bdev_lvol_delete", 00:05:29.508 "bdev_lvol_set_read_only", 
00:05:29.508 "bdev_lvol_resize", 00:05:29.508 "bdev_lvol_decouple_parent", 00:05:29.508 "bdev_lvol_inflate", 00:05:29.508 "bdev_lvol_rename", 00:05:29.508 "bdev_lvol_clone_bdev", 00:05:29.508 "bdev_lvol_clone", 00:05:29.508 "bdev_lvol_snapshot", 00:05:29.508 "bdev_lvol_create", 00:05:29.508 "bdev_lvol_delete_lvstore", 00:05:29.508 "bdev_lvol_rename_lvstore", 00:05:29.508 "bdev_lvol_create_lvstore", 00:05:29.508 "bdev_raid_set_options", 00:05:29.508 "bdev_raid_remove_base_bdev", 00:05:29.508 "bdev_raid_add_base_bdev", 00:05:29.508 "bdev_raid_delete", 00:05:29.508 "bdev_raid_create", 00:05:29.508 "bdev_raid_get_bdevs", 00:05:29.508 "bdev_error_inject_error", 00:05:29.508 "bdev_error_delete", 00:05:29.508 "bdev_error_create", 00:05:29.508 "bdev_split_delete", 00:05:29.508 "bdev_split_create", 00:05:29.508 "bdev_delay_delete", 00:05:29.508 "bdev_delay_create", 00:05:29.508 "bdev_delay_update_latency", 00:05:29.508 "bdev_zone_block_delete", 00:05:29.508 "bdev_zone_block_create", 00:05:29.508 "blobfs_create", 00:05:29.508 "blobfs_detect", 00:05:29.508 "blobfs_set_cache_size", 00:05:29.508 "bdev_aio_delete", 00:05:29.508 "bdev_aio_rescan", 00:05:29.508 "bdev_aio_create", 00:05:29.508 "bdev_ftl_set_property", 00:05:29.508 "bdev_ftl_get_properties", 00:05:29.508 "bdev_ftl_get_stats", 00:05:29.508 "bdev_ftl_unmap", 00:05:29.508 "bdev_ftl_unload", 00:05:29.508 "bdev_ftl_delete", 00:05:29.508 "bdev_ftl_load", 00:05:29.508 "bdev_ftl_create", 00:05:29.508 "bdev_virtio_attach_controller", 00:05:29.508 "bdev_virtio_scsi_get_devices", 00:05:29.508 "bdev_virtio_detach_controller", 00:05:29.508 "bdev_virtio_blk_set_hotplug", 00:05:29.508 "bdev_iscsi_delete", 00:05:29.508 "bdev_iscsi_create", 00:05:29.508 "bdev_iscsi_set_options", 00:05:29.508 "accel_error_inject_error", 00:05:29.508 "ioat_scan_accel_module", 00:05:29.508 "dsa_scan_accel_module", 00:05:29.508 "iaa_scan_accel_module", 00:05:29.508 "keyring_file_remove_key", 00:05:29.508 "keyring_file_add_key", 00:05:29.508 
"keyring_linux_set_options", 00:05:29.508 "fsdev_aio_delete", 00:05:29.508 "fsdev_aio_create", 00:05:29.508 "iscsi_get_histogram", 00:05:29.508 "iscsi_enable_histogram", 00:05:29.508 "iscsi_set_options", 00:05:29.508 "iscsi_get_auth_groups", 00:05:29.508 "iscsi_auth_group_remove_secret", 00:05:29.508 "iscsi_auth_group_add_secret", 00:05:29.508 "iscsi_delete_auth_group", 00:05:29.508 "iscsi_create_auth_group", 00:05:29.508 "iscsi_set_discovery_auth", 00:05:29.508 "iscsi_get_options", 00:05:29.508 "iscsi_target_node_request_logout", 00:05:29.508 "iscsi_target_node_set_redirect", 00:05:29.508 "iscsi_target_node_set_auth", 00:05:29.508 "iscsi_target_node_add_lun", 00:05:29.508 "iscsi_get_stats", 00:05:29.508 "iscsi_get_connections", 00:05:29.508 "iscsi_portal_group_set_auth", 00:05:29.508 "iscsi_start_portal_group", 00:05:29.508 "iscsi_delete_portal_group", 00:05:29.508 "iscsi_create_portal_group", 00:05:29.508 "iscsi_get_portal_groups", 00:05:29.508 "iscsi_delete_target_node", 00:05:29.508 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.508 "iscsi_target_node_add_pg_ig_maps", 00:05:29.508 "iscsi_create_target_node", 00:05:29.508 "iscsi_get_target_nodes", 00:05:29.508 "iscsi_delete_initiator_group", 00:05:29.508 "iscsi_initiator_group_remove_initiators", 00:05:29.508 "iscsi_initiator_group_add_initiators", 00:05:29.508 "iscsi_create_initiator_group", 00:05:29.508 "iscsi_get_initiator_groups", 00:05:29.508 "nvmf_set_crdt", 00:05:29.508 "nvmf_set_config", 00:05:29.508 "nvmf_set_max_subsystems", 00:05:29.508 "nvmf_stop_mdns_prr", 00:05:29.508 "nvmf_publish_mdns_prr", 00:05:29.508 "nvmf_subsystem_get_listeners", 00:05:29.508 "nvmf_subsystem_get_qpairs", 00:05:29.508 "nvmf_subsystem_get_controllers", 00:05:29.508 "nvmf_get_stats", 00:05:29.508 "nvmf_get_transports", 00:05:29.508 "nvmf_create_transport", 00:05:29.508 "nvmf_get_targets", 00:05:29.508 "nvmf_delete_target", 00:05:29.508 "nvmf_create_target", 00:05:29.508 "nvmf_subsystem_allow_any_host", 00:05:29.508 
"nvmf_subsystem_set_keys", 00:05:29.508 "nvmf_subsystem_remove_host", 00:05:29.508 "nvmf_subsystem_add_host", 00:05:29.508 "nvmf_ns_remove_host", 00:05:29.508 "nvmf_ns_add_host", 00:05:29.508 "nvmf_subsystem_remove_ns", 00:05:29.508 "nvmf_subsystem_set_ns_ana_group", 00:05:29.508 "nvmf_subsystem_add_ns", 00:05:29.508 "nvmf_subsystem_listener_set_ana_state", 00:05:29.508 "nvmf_discovery_get_referrals", 00:05:29.508 "nvmf_discovery_remove_referral", 00:05:29.508 "nvmf_discovery_add_referral", 00:05:29.508 "nvmf_subsystem_remove_listener", 00:05:29.508 "nvmf_subsystem_add_listener", 00:05:29.508 "nvmf_delete_subsystem", 00:05:29.508 "nvmf_create_subsystem", 00:05:29.508 "nvmf_get_subsystems", 00:05:29.508 "env_dpdk_get_mem_stats", 00:05:29.508 "nbd_get_disks", 00:05:29.508 "nbd_stop_disk", 00:05:29.508 "nbd_start_disk", 00:05:29.508 "ublk_recover_disk", 00:05:29.508 "ublk_get_disks", 00:05:29.508 "ublk_stop_disk", 00:05:29.508 "ublk_start_disk", 00:05:29.508 "ublk_destroy_target", 00:05:29.508 "ublk_create_target", 00:05:29.508 "virtio_blk_create_transport", 00:05:29.508 "virtio_blk_get_transports", 00:05:29.508 "vhost_controller_set_coalescing", 00:05:29.508 "vhost_get_controllers", 00:05:29.508 "vhost_delete_controller", 00:05:29.508 "vhost_create_blk_controller", 00:05:29.508 "vhost_scsi_controller_remove_target", 00:05:29.508 "vhost_scsi_controller_add_target", 00:05:29.508 "vhost_start_scsi_controller", 00:05:29.508 "vhost_create_scsi_controller", 00:05:29.508 "thread_set_cpumask", 00:05:29.508 "scheduler_set_options", 00:05:29.508 "framework_get_governor", 00:05:29.508 "framework_get_scheduler", 00:05:29.508 "framework_set_scheduler", 00:05:29.508 "framework_get_reactors", 00:05:29.508 "thread_get_io_channels", 00:05:29.508 "thread_get_pollers", 00:05:29.508 "thread_get_stats", 00:05:29.508 "framework_monitor_context_switch", 00:05:29.508 "spdk_kill_instance", 00:05:29.508 "log_enable_timestamps", 00:05:29.508 "log_get_flags", 00:05:29.508 "log_clear_flag", 
00:05:29.508 "log_set_flag", 00:05:29.508 "log_get_level", 00:05:29.508 "log_set_level", 00:05:29.508 "log_get_print_level", 00:05:29.508 "log_set_print_level", 00:05:29.508 "framework_enable_cpumask_locks", 00:05:29.508 "framework_disable_cpumask_locks", 00:05:29.508 "framework_wait_init", 00:05:29.508 "framework_start_init", 00:05:29.508 "scsi_get_devices", 00:05:29.508 "bdev_get_histogram", 00:05:29.508 "bdev_enable_histogram", 00:05:29.508 "bdev_set_qos_limit", 00:05:29.508 "bdev_set_qd_sampling_period", 00:05:29.508 "bdev_get_bdevs", 00:05:29.508 "bdev_reset_iostat", 00:05:29.508 "bdev_get_iostat", 00:05:29.508 "bdev_examine", 00:05:29.508 "bdev_wait_for_examine", 00:05:29.508 "bdev_set_options", 00:05:29.508 "accel_get_stats", 00:05:29.508 "accel_set_options", 00:05:29.508 "accel_set_driver", 00:05:29.508 "accel_crypto_key_destroy", 00:05:29.508 "accel_crypto_keys_get", 00:05:29.508 "accel_crypto_key_create", 00:05:29.508 "accel_assign_opc", 00:05:29.508 "accel_get_module_info", 00:05:29.508 "accel_get_opc_assignments", 00:05:29.508 "vmd_rescan", 00:05:29.508 "vmd_remove_device", 00:05:29.508 "vmd_enable", 00:05:29.508 "sock_get_default_impl", 00:05:29.508 "sock_set_default_impl", 00:05:29.508 "sock_impl_set_options", 00:05:29.508 "sock_impl_get_options", 00:05:29.508 "iobuf_get_stats", 00:05:29.508 "iobuf_set_options", 00:05:29.508 "keyring_get_keys", 00:05:29.508 "framework_get_pci_devices", 00:05:29.508 "framework_get_config", 00:05:29.508 "framework_get_subsystems", 00:05:29.508 "fsdev_set_opts", 00:05:29.508 "fsdev_get_opts", 00:05:29.508 "trace_get_info", 00:05:29.508 "trace_get_tpoint_group_mask", 00:05:29.508 "trace_disable_tpoint_group", 00:05:29.508 "trace_enable_tpoint_group", 00:05:29.508 "trace_clear_tpoint_mask", 00:05:29.508 "trace_set_tpoint_mask", 00:05:29.508 "notify_get_notifications", 00:05:29.508 "notify_get_types", 00:05:29.508 "spdk_get_version", 00:05:29.508 "rpc_get_methods" 00:05:29.508 ] 00:05:29.508 13:21:59 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.508 13:21:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.508 13:21:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57882 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57882 ']' 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57882 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57882 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.508 13:21:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57882' 00:05:29.509 killing process with pid 57882 00:05:29.509 13:21:59 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57882 00:05:29.509 13:21:59 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57882 00:05:32.045 ************************************ 00:05:32.045 END TEST spdkcli_tcp 00:05:32.045 ************************************ 00:05:32.045 00:05:32.045 real 0m4.185s 00:05:32.045 user 0m7.487s 00:05:32.045 sys 0m0.597s 00:05:32.045 13:22:01 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.045 13:22:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.045 13:22:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.045 13:22:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.045 13:22:01 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.045 13:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.045 ************************************ 00:05:32.045 START TEST dpdk_mem_utility 00:05:32.045 ************************************ 00:05:32.045 13:22:01 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.045 * Looking for test storage... 00:05:32.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:32.045 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.045 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.045 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:32.305 
13:22:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.305 13:22:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.305 --rc genhtml_branch_coverage=1 00:05:32.305 --rc genhtml_function_coverage=1 00:05:32.305 --rc genhtml_legend=1 00:05:32.305 --rc geninfo_all_blocks=1 00:05:32.305 --rc geninfo_unexecuted_blocks=1 00:05:32.305 00:05:32.305 ' 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.305 --rc 
genhtml_branch_coverage=1 00:05:32.305 --rc genhtml_function_coverage=1 00:05:32.305 --rc genhtml_legend=1 00:05:32.305 --rc geninfo_all_blocks=1 00:05:32.305 --rc geninfo_unexecuted_blocks=1 00:05:32.305 00:05:32.305 ' 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.305 --rc genhtml_branch_coverage=1 00:05:32.305 --rc genhtml_function_coverage=1 00:05:32.305 --rc genhtml_legend=1 00:05:32.305 --rc geninfo_all_blocks=1 00:05:32.305 --rc geninfo_unexecuted_blocks=1 00:05:32.305 00:05:32.305 ' 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.305 --rc genhtml_branch_coverage=1 00:05:32.305 --rc genhtml_function_coverage=1 00:05:32.305 --rc genhtml_legend=1 00:05:32.305 --rc geninfo_all_blocks=1 00:05:32.305 --rc geninfo_unexecuted_blocks=1 00:05:32.305 00:05:32.305 ' 00:05:32.305 13:22:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:32.305 13:22:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58004 00:05:32.305 13:22:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:32.305 13:22:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58004 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58004 ']' 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.305 13:22:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.305 [2024-11-18 13:22:02.270916] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:32.305 [2024-11-18 13:22:02.271122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58004 ] 00:05:32.566 [2024-11-18 13:22:02.434427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.566 [2024-11-18 13:22:02.551449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.505 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.505 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:33.505 13:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:33.505 13:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:33.505 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.505 13:22:03 dpdk_mem_utility -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.505 { 00:05:33.505 "filename": "/tmp/spdk_mem_dump.txt" 00:05:33.505 } 00:05:33.505 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.505 13:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:33.505 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:33.505 1 heaps totaling size 816.000000 MiB 00:05:33.505 size: 816.000000 MiB heap id: 0 00:05:33.505 end heaps---------- 00:05:33.505 9 mempools totaling size 595.772034 MiB 00:05:33.505 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:33.505 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:33.505 size: 92.545471 MiB name: bdev_io_58004 00:05:33.505 size: 50.003479 MiB name: msgpool_58004 00:05:33.505 size: 36.509338 MiB name: fsdev_io_58004 00:05:33.505 size: 21.763794 MiB name: PDU_Pool 00:05:33.505 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:33.505 size: 4.133484 MiB name: evtpool_58004 00:05:33.505 size: 0.026123 MiB name: Session_Pool 00:05:33.505 end mempools------- 00:05:33.505 6 memzones totaling size 4.142822 MiB 00:05:33.505 size: 1.000366 MiB name: RG_ring_0_58004 00:05:33.505 size: 1.000366 MiB name: RG_ring_1_58004 00:05:33.505 size: 1.000366 MiB name: RG_ring_4_58004 00:05:33.505 size: 1.000366 MiB name: RG_ring_5_58004 00:05:33.505 size: 0.125366 MiB name: RG_ring_2_58004 00:05:33.505 size: 0.015991 MiB name: RG_ring_3_58004 00:05:33.505 end memzones------- 00:05:33.505 13:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:33.505 heap id: 0 total size: 816.000000 MiB number of busy elements: 309 number of free elements: 18 00:05:33.505 list of free elements. 
size: 16.792847 MiB 00:05:33.505 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:33.505 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:33.505 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:33.505 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:33.505 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:33.505 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:33.505 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:33.505 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:33.505 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:33.505 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:33.505 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:33.505 element at address: 0x20001ac00000 with size: 0.563171 MiB 00:05:33.505 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:33.505 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:33.505 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:33.505 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:33.505 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:33.505 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:33.505 list of standard malloc elements. 
size: 199.286255 MiB 00:05:33.505 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:33.505 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:33.505 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:33.505 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:33.505 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:33.505 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:33.505 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:33.505 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:33.505 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:33.505 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:33.505 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:33.505 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:33.505 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:33.505 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:33.505 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:33.505 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:33.506 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:33.506 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:33.506 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:33.506 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac917c0 with size: 0.000244 
MiB 00:05:33.506 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac933c0 
with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:33.506 element at 
address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:33.506 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:33.507 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:33.507 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c280 with size: 0.000244 MiB 
00:05:33.507 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806de80 with 
size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:33.507 element at address: 
0x20002806fa80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:33.507 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:33.507 list of memzone associated elements. size: 599.920898 MiB 00:05:33.507 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:33.507 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:33.507 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:33.507 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:33.507 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:33.507 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58004_0 00:05:33.507 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:33.507 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58004_0 00:05:33.507 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:33.507 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58004_0 00:05:33.507 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:33.507 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:33.507 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:33.507 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:33.507 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:33.507 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58004_0 00:05:33.507 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:33.507 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58004 00:05:33.507 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:33.507 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58004 00:05:33.507 element at address: 
0x200018efde00 with size: 1.008179 MiB 00:05:33.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:33.507 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:33.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:33.507 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:33.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:33.507 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:33.507 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:33.507 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:33.507 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58004 00:05:33.507 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:33.507 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58004 00:05:33.507 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:33.507 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58004 00:05:33.507 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:33.507 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58004 00:05:33.507 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:33.507 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58004 00:05:33.507 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:33.507 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58004 00:05:33.507 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:33.507 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:33.507 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:33.507 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:33.507 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:33.507 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:33.507 element at 
address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:33.507 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58004 00:05:33.507 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:33.507 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58004 00:05:33.507 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:33.507 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:33.507 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:33.507 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:33.507 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:33.507 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58004 00:05:33.507 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:33.507 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:33.507 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:33.507 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58004 00:05:33.507 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:33.507 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58004 00:05:33.508 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:33.508 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58004 00:05:33.508 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:33.508 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:33.508 13:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:33.508 13:22:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58004 00:05:33.508 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58004 ']' 00:05:33.508 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58004 00:05:33.508 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # 
uname 00:05:33.508 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.508 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58004 00:05:33.768 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.768 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.768 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58004' 00:05:33.768 killing process with pid 58004 00:05:33.768 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58004 00:05:33.768 13:22:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58004 00:05:36.309 00:05:36.309 real 0m3.962s 00:05:36.309 user 0m3.879s 00:05:36.309 sys 0m0.537s 00:05:36.309 13:22:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.309 13:22:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.309 ************************************ 00:05:36.309 END TEST dpdk_mem_utility 00:05:36.309 ************************************ 00:05:36.309 13:22:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:36.309 13:22:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.309 13:22:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.309 13:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:36.309 ************************************ 00:05:36.309 START TEST event 00:05:36.309 ************************************ 00:05:36.309 13:22:05 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:36.309 * Looking for test storage... 
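The killprocess trace above follows a standard shutdown guard: probe the pid with `kill -0` (which sends no signal, only checks existence), verify the process name, then kill and reap. A minimal sketch of that pattern (names are illustrative; the real helper lives in common/autotest_common.sh and additionally checks the comm name and sudo):

```shell
# Hedged sketch of the killprocess pattern traced in the log above.
killprocess() {
    local pid=$1
    # kill -0 only tests whether the process exists; it delivers no signal
    if kill -0 "$pid" 2> /dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        # reap the child so no zombie is left behind; ignore the kill status
        wait "$pid" 2> /dev/null || true
    fi
}
```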
00:05:36.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:36.309 13:22:06 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.309 13:22:06 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.309 13:22:06 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.309 13:22:06 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.309 13:22:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.309 13:22:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.309 13:22:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.309 13:22:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.309 13:22:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.309 13:22:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.309 13:22:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.309 13:22:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.309 13:22:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.309 13:22:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.309 13:22:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.309 13:22:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:36.309 13:22:06 event -- scripts/common.sh@345 -- # : 1 00:05:36.309 13:22:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.309 13:22:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.309 13:22:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:36.309 13:22:06 event -- scripts/common.sh@353 -- # local d=1 00:05:36.309 13:22:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.309 13:22:06 event -- scripts/common.sh@355 -- # echo 1 00:05:36.309 13:22:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.309 13:22:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:36.309 13:22:06 event -- scripts/common.sh@353 -- # local d=2 00:05:36.309 13:22:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.309 13:22:06 event -- scripts/common.sh@355 -- # echo 2 00:05:36.309 13:22:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.309 13:22:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.309 13:22:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.309 13:22:06 event -- scripts/common.sh@368 -- # return 0 00:05:36.309 13:22:06 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.309 13:22:06 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.309 --rc genhtml_branch_coverage=1 00:05:36.309 --rc genhtml_function_coverage=1 00:05:36.309 --rc genhtml_legend=1 00:05:36.309 --rc geninfo_all_blocks=1 00:05:36.309 --rc geninfo_unexecuted_blocks=1 00:05:36.309 00:05:36.309 ' 00:05:36.309 13:22:06 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.309 --rc genhtml_branch_coverage=1 00:05:36.309 --rc genhtml_function_coverage=1 00:05:36.309 --rc genhtml_legend=1 00:05:36.309 --rc geninfo_all_blocks=1 00:05:36.309 --rc geninfo_unexecuted_blocks=1 00:05:36.309 00:05:36.309 ' 00:05:36.309 13:22:06 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.310 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:36.310 --rc genhtml_branch_coverage=1 00:05:36.310 --rc genhtml_function_coverage=1 00:05:36.310 --rc genhtml_legend=1 00:05:36.310 --rc geninfo_all_blocks=1 00:05:36.310 --rc geninfo_unexecuted_blocks=1 00:05:36.310 00:05:36.310 ' 00:05:36.310 13:22:06 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.310 --rc genhtml_branch_coverage=1 00:05:36.310 --rc genhtml_function_coverage=1 00:05:36.310 --rc genhtml_legend=1 00:05:36.310 --rc geninfo_all_blocks=1 00:05:36.310 --rc geninfo_unexecuted_blocks=1 00:05:36.310 00:05:36.310 ' 00:05:36.310 13:22:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:36.310 13:22:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:36.310 13:22:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.310 13:22:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:36.310 13:22:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.310 13:22:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.310 ************************************ 00:05:36.310 START TEST event_perf 00:05:36.310 ************************************ 00:05:36.310 13:22:06 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:36.310 Running I/O for 1 seconds...[2024-11-18 13:22:06.248402] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
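The cmp_versions/lt trace above shows how scripts/common.sh compares dotted version strings: split both versions into arrays on the separator set, then compare numerically field by field, treating missing fields as zero. A simplified sketch of just the less-than case (the real script also splits on ':' and handles the '>', '==' and '>=' operators):

```shell
# Simplified sketch of the version comparison traced above:
# returns 0 (true) when $1 is strictly less than $2, e.g. 1.15 < 2.
version_lt() {
    local IFS=.-  # split fields on '.' and '-', as in scripts/common.sh
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # missing trailing fields count as 0 (so 2.39 == 2.39.0)
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1 # equal is not less-than
}
```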
00:05:36.310 [2024-11-18 13:22:06.248632] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58112 ] 00:05:36.569 [2024-11-18 13:22:06.433516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.569 [2024-11-18 13:22:06.558956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.569 [2024-11-18 13:22:06.559166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.569 [2024-11-18 13:22:06.559268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.569 Running I/O for 1 seconds...[2024-11-18 13:22:06.559315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.945 00:05:37.945 lcore 0: 203288 00:05:37.946 lcore 1: 203289 00:05:37.946 lcore 2: 203292 00:05:37.946 lcore 3: 203286 00:05:37.946 done. 
00:05:37.946 00:05:37.946 real 0m1.590s 00:05:37.946 user 0m4.362s 00:05:37.946 sys 0m0.107s 00:05:37.946 13:22:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.946 13:22:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.946 ************************************ 00:05:37.946 END TEST event_perf 00:05:37.946 ************************************ 00:05:37.946 13:22:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.946 13:22:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.946 13:22:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.946 13:22:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.946 ************************************ 00:05:37.946 START TEST event_reactor 00:05:37.946 ************************************ 00:05:37.946 13:22:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:37.946 [2024-11-18 13:22:07.913310] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:37.946 [2024-11-18 13:22:07.913461] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58151 ] 00:05:38.204 [2024-11-18 13:22:08.084761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.204 [2024-11-18 13:22:08.203498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.579 test_start 00:05:39.579 oneshot 00:05:39.579 tick 100 00:05:39.579 tick 100 00:05:39.579 tick 250 00:05:39.579 tick 100 00:05:39.579 tick 100 00:05:39.579 tick 100 00:05:39.579 tick 250 00:05:39.579 tick 500 00:05:39.579 tick 100 00:05:39.579 tick 100 00:05:39.579 tick 250 00:05:39.579 tick 100 00:05:39.579 tick 100 00:05:39.579 test_end 00:05:39.579 ************************************ 00:05:39.579 END TEST event_reactor 00:05:39.579 ************************************ 00:05:39.579 00:05:39.579 real 0m1.563s 00:05:39.579 user 0m1.368s 00:05:39.580 sys 0m0.087s 00:05:39.580 13:22:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.580 13:22:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.580 13:22:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.580 13:22:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:39.580 13:22:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.580 13:22:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.580 ************************************ 00:05:39.580 START TEST event_reactor_perf 00:05:39.580 ************************************ 00:05:39.580 13:22:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.580 [2024-11-18 
13:22:09.543622] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:39.580 [2024-11-18 13:22:09.543755] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58188 ] 00:05:39.838 [2024-11-18 13:22:09.728630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.838 [2024-11-18 13:22:09.845877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.238 test_start 00:05:41.238 test_end 00:05:41.238 Performance: 370198 events per second 00:05:41.238 00:05:41.238 real 0m1.570s 00:05:41.238 user 0m1.379s 00:05:41.238 sys 0m0.083s 00:05:41.238 ************************************ 00:05:41.238 END TEST event_reactor_perf 00:05:41.238 ************************************ 00:05:41.238 13:22:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.238 13:22:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.238 13:22:11 event -- event/event.sh@49 -- # uname -s 00:05:41.238 13:22:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.238 13:22:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:41.238 13:22:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.238 13:22:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.238 13:22:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.238 ************************************ 00:05:41.238 START TEST event_scheduler 00:05:41.238 ************************************ 00:05:41.238 13:22:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:41.238 * Looking for test storage... 
00:05:41.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:41.238 13:22:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.238 13:22:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.238 13:22:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.499 13:22:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.499 --rc genhtml_branch_coverage=1 00:05:41.499 --rc genhtml_function_coverage=1 00:05:41.499 --rc genhtml_legend=1 00:05:41.499 --rc geninfo_all_blocks=1 00:05:41.499 --rc geninfo_unexecuted_blocks=1 00:05:41.499 00:05:41.499 ' 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.499 --rc genhtml_branch_coverage=1 00:05:41.499 --rc genhtml_function_coverage=1 00:05:41.499 --rc 
genhtml_legend=1 00:05:41.499 --rc geninfo_all_blocks=1 00:05:41.499 --rc geninfo_unexecuted_blocks=1 00:05:41.499 00:05:41.499 ' 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.499 --rc genhtml_branch_coverage=1 00:05:41.499 --rc genhtml_function_coverage=1 00:05:41.499 --rc genhtml_legend=1 00:05:41.499 --rc geninfo_all_blocks=1 00:05:41.499 --rc geninfo_unexecuted_blocks=1 00:05:41.499 00:05:41.499 ' 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.499 --rc genhtml_branch_coverage=1 00:05:41.499 --rc genhtml_function_coverage=1 00:05:41.499 --rc genhtml_legend=1 00:05:41.499 --rc geninfo_all_blocks=1 00:05:41.499 --rc geninfo_unexecuted_blocks=1 00:05:41.499 00:05:41.499 ' 00:05:41.499 13:22:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.499 13:22:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58264 00:05:41.499 13:22:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:41.499 13:22:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.499 13:22:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58264 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58264 ']' 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:41.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.499 13:22:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.499 [2024-11-18 13:22:11.436315] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:41.499 [2024-11-18 13:22:11.436931] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58264 ] 00:05:41.758 [2024-11-18 13:22:11.591929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.758 [2024-11-18 13:22:11.709302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.758 [2024-11-18 13:22:11.709452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.758 [2024-11-18 13:22:11.709582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.758 [2024-11-18 13:22:11.709621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.327 13:22:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.327 13:22:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:42.327 13:22:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.327 13:22:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.327 13:22:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.327 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.327 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.327 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.327 POWER: Cannot set governor of lcore 0 to performance 00:05:42.327 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.327 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.327 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.327 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.327 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:42.327 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:42.327 POWER: Unable to set Power Management Environment for lcore 0 00:05:42.327 [2024-11-18 13:22:12.270313] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:42.327 [2024-11-18 13:22:12.270336] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:42.327 [2024-11-18 13:22:12.270348] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.327 [2024-11-18 13:22:12.270386] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.327 [2024-11-18 13:22:12.270397] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.327 [2024-11-18 13:22:12.270407] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.327 13:22:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.327 13:22:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.327 13:22:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.327 13:22:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.587 [2024-11-18 13:22:12.592959] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:42.587 13:22:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.587 13:22:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.587 13:22:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.587 13:22:12 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.587 13:22:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.587 ************************************ 00:05:42.587 START TEST scheduler_create_thread 00:05:42.587 ************************************ 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.587 2 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.587 3 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.587 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 4 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 5 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 6 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.847 7 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 8 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 9 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 10 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.847 13:22:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.227 13:22:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.227 13:22:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:44.227 13:22:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:44.227 13:22:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.227 13:22:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.233 ************************************ 00:05:45.233 END TEST scheduler_create_thread 00:05:45.233 ************************************ 00:05:45.233 13:22:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.233 00:05:45.233 real 0m2.619s 00:05:45.233 user 0m0.028s 00:05:45.233 sys 0m0.010s 00:05:45.233 13:22:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.233 13:22:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.233 13:22:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:45.233 13:22:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58264 00:05:45.233 13:22:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58264 ']' 00:05:45.233 13:22:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58264 00:05:45.491 13:22:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:45.491 13:22:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.491 13:22:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58264 00:05:45.491 killing process with pid 58264 00:05:45.491 13:22:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:45.491 13:22:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:45.491 13:22:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58264' 00:05:45.492 13:22:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58264 00:05:45.492 13:22:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58264 00:05:45.751 [2024-11-18 13:22:15.705400] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:47.129 ************************************ 00:05:47.129 END TEST event_scheduler 00:05:47.129 ************************************ 00:05:47.129 00:05:47.129 real 0m5.715s 00:05:47.129 user 0m9.791s 00:05:47.129 sys 0m0.496s 00:05:47.129 13:22:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.129 13:22:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.129 13:22:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.129 13:22:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.129 13:22:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.129 13:22:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.129 13:22:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.129 ************************************ 00:05:47.129 START TEST app_repeat 00:05:47.129 ************************************ 00:05:47.129 13:22:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:47.129 13:22:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.129 13:22:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.129 13:22:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.129 13:22:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.129 13:22:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.129 13:22:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.129 13:22:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.129 13:22:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58370 00:05:47.130 13:22:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.130 Process app_repeat pid: 58370 00:05:47.130 
13:22:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58370' 00:05:47.130 spdk_app_start Round 0 00:05:47.130 13:22:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.130 13:22:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.130 13:22:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58370 /var/tmp/spdk-nbd.sock 00:05:47.130 13:22:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58370 ']' 00:05:47.130 13:22:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.130 13:22:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.130 13:22:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.130 13:22:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.130 13:22:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.130 13:22:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.130 [2024-11-18 13:22:16.964296] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:47.130 [2024-11-18 13:22:16.964409] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58370 ] 00:05:47.130 [2024-11-18 13:22:17.137254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.389 [2024-11-18 13:22:17.252578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.389 [2024-11-18 13:22:17.252614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.958 13:22:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.958 13:22:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.958 13:22:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.217 Malloc0 00:05:48.217 13:22:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.476 Malloc1 00:05:48.476 13:22:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.476 13:22:18 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.476 13:22:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.788 /dev/nbd0 00:05:48.788 13:22:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.788 13:22:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.788 13:22:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:48.788 13:22:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.788 13:22:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.788 13:22:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.789 1+0 records in 00:05:48.789 1+0 
records out 00:05:48.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410541 s, 10.0 MB/s 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.789 13:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.789 13:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.789 13:22:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.789 /dev/nbd1 00:05:48.789 13:22:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.789 13:22:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.789 13:22:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.789 1+0 records in 00:05:48.789 1+0 records out 00:05:48.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365106 s, 11.2 MB/s 00:05:49.048 13:22:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.048 13:22:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.048 13:22:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.048 13:22:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.048 13:22:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.048 13:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.048 13:22:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.048 13:22:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.048 13:22:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.048 13:22:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.048 13:22:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.048 { 00:05:49.048 "nbd_device": "/dev/nbd0", 00:05:49.048 "bdev_name": "Malloc0" 00:05:49.048 }, 00:05:49.048 { 00:05:49.048 "nbd_device": "/dev/nbd1", 00:05:49.048 "bdev_name": "Malloc1" 00:05:49.048 } 00:05:49.048 ]' 00:05:49.048 13:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.048 { 00:05:49.048 "nbd_device": "/dev/nbd0", 00:05:49.048 "bdev_name": "Malloc0" 00:05:49.048 }, 00:05:49.048 { 00:05:49.048 "nbd_device": "/dev/nbd1", 00:05:49.048 "bdev_name": "Malloc1" 00:05:49.048 } 00:05:49.048 ]' 00:05:49.048 13:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
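The `nbd_get_count` sequence above lists disks over the RPC socket, extracts the `nbd_device` fields with `jq`, then counts them with `grep -c /dev/nbd`. A minimal standalone sketch of that counting step (the function name `count_nbd_devices` is mine, not from the SPDK helpers); note that `grep -c` exits non-zero when there are zero matches, which is why the real script tolerates the failing exit status (the `-- # true` line in the log) so a `set -e` script survives an empty device list:

```shell
# Count how many /dev/nbd entries appear in a newline-separated name list,
# mirroring the grep -c pattern used by nbd_get_count in the log above.
count_nbd_devices() {
    local names=$1
    local count
    # grep -c prints the match count but exits 1 on zero matches;
    # '|| true' keeps 'set -e' scripts alive and leaves count=0.
    count=$(printf '%s\n' "$names" | grep -c /dev/nbd || true)
    echo "$count"
}

count_nbd_devices $'/dev/nbd0\n/dev/nbd1'   # prints 2
count_nbd_devices ''                        # prints 0
```

The same zero-match behavior explains the `-- # true` xtrace line that appears later in the log, when the disks have been stopped and the list is empty.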
00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.306 /dev/nbd1' 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.306 /dev/nbd1' 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.306 256+0 records in 00:05:49.306 256+0 records out 00:05:49.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00882176 s, 119 MB/s 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.306 256+0 records in 00:05:49.306 256+0 records out 00:05:49.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264144 s, 39.7 MB/s 00:05:49.306 13:22:19 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.306 13:22:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.306 256+0 records in 00:05:49.306 256+0 records out 00:05:49.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258858 s, 40.5 MB/s 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.307 13:22:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.564 13:22:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.823 13:22:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.823 13:22:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.823 13:22:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.824 13:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.084 13:22:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.084 13:22:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.344 13:22:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.724 [2024-11-18 13:22:21.410477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.724 [2024-11-18 13:22:21.525091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.724 [2024-11-18 13:22:21.525095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.724 
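After each `nbd_stop_disk` RPC, the `waitfornbd_exit` helper in the log polls `/proc/partitions` until the device name disappears, giving the kernel time to tear the device down before the count is re-checked. A sketch of that loop, written against an arbitrary file so it runs without real nbd devices (`wait_for_gone` and the temp file are stand-ins, not the real helper or `/proc` path; the real helper also has a `sleep` between polls, kept here):

```shell
# Poll a partitions-style file until $name no longer appears as a whole
# word, up to 20 tries with a 0.1s delay -- the waitfornbd_exit pattern.
wait_for_gone() {
    local name=$1 file=$2 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$file" || return 0   # gone: success
        sleep 0.1
    done
    return 1   # still present after 20 polls
}

tmp=$(mktemp)
printf 'sda\n' > "$tmp"      # nbd0 is never listed, so this returns at once
wait_for_gone nbd0 "$tmp" && echo "nbd0 detached"
rm -f "$tmp"
```

The `-w` flag matters: it prevents `nbd0` from matching `nbd01` or a partition name that merely contains the string.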
[2024-11-18 13:22:21.717959] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.724 [2024-11-18 13:22:21.718033] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.657 13:22:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.657 spdk_app_start Round 1 00:05:53.657 13:22:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.657 13:22:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58370 /var/tmp/spdk-nbd.sock 00:05:53.657 13:22:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58370 ']' 00:05:53.657 13:22:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.657 13:22:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.657 13:22:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
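The `waitforlisten 58370 /var/tmp/spdk-nbd.sock` call above blocks (with `max_retries=100` per the log) until the restarted app creates its RPC socket. A much-reduced stand-in that only polls for a path to appear; the real helper additionally verifies the pid is alive and that the socket accepts connections, which this sketch does not attempt:

```shell
# Poll until $path exists, up to $retries tries with a 0.1s delay.
# A simplified stand-in for waitforlisten's socket wait.
wait_for_path() {
    local path=$1 retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk-nbd.sock" ) &   # the "app" starts up shortly
wait_for_path "$tmp/spdk-nbd.sock" && echo ready
wait
rm -rf "$tmp"
```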
00:05:53.657 13:22:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.657 13:22:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.657 13:22:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.657 13:22:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.657 13:22:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.917 Malloc0 00:05:53.917 13:22:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.178 Malloc1 00:05:54.178 13:22:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.178 13:22:24 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.178 13:22:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.437 /dev/nbd0 00:05:54.437 13:22:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.437 13:22:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.437 1+0 records in 00:05:54.437 1+0 records out 00:05:54.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444243 s, 9.2 MB/s 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.437 13:22:24 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.437 13:22:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.437 13:22:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.437 13:22:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.438 13:22:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.698 /dev/nbd1 00:05:54.698 13:22:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.698 13:22:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.698 1+0 records in 00:05:54.698 1+0 records out 00:05:54.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289743 s, 14.1 MB/s 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.698 13:22:24 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.698 13:22:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.698 13:22:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.698 13:22:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.698 13:22:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.698 13:22:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.698 13:22:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.958 { 00:05:54.958 "nbd_device": "/dev/nbd0", 00:05:54.958 "bdev_name": "Malloc0" 00:05:54.958 }, 00:05:54.958 { 00:05:54.958 "nbd_device": "/dev/nbd1", 00:05:54.958 "bdev_name": "Malloc1" 00:05:54.958 } 00:05:54.958 ]' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.958 { 00:05:54.958 "nbd_device": "/dev/nbd0", 00:05:54.958 "bdev_name": "Malloc0" 00:05:54.958 }, 00:05:54.958 { 00:05:54.958 "nbd_device": "/dev/nbd1", 00:05:54.958 "bdev_name": "Malloc1" 00:05:54.958 } 00:05:54.958 ]' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.958 /dev/nbd1' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.958 /dev/nbd1' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.958 
13:22:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.958 256+0 records in 00:05:54.958 256+0 records out 00:05:54.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127158 s, 82.5 MB/s 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.958 256+0 records in 00:05:54.958 256+0 records out 00:05:54.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023961 s, 43.8 MB/s 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.958 256+0 records in 00:05:54.958 256+0 records out 00:05:54.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277518 s, 37.8 MB/s 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
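The write/verify phase visible in the log dd's 1 MiB of `/dev/urandom` into a temp file, copies it onto each `/dev/nbdX` with `oflag=direct`, then `cmp -b -n 1M`'s the device contents back against the file. The same roundtrip can be sketched against a plain file, so it runs without an nbd device (and without `oflag=direct`, which regular files may reject):

```shell
# Write random data, copy it to a stand-in "device" file, verify with cmp.
tmp=$(mktemp -d)
dd if=/dev/urandom of="$tmp/nbdrandtest" bs=4096 count=256 status=none
dd if="$tmp/nbdrandtest" of="$tmp/fakedev" bs=4096 count=256 status=none
# -b prints differing bytes, -n 1M limits the comparison to the written span
cmp -b -n 1M "$tmp/nbdrandtest" "$tmp/fakedev" && echo "verify ok"
rm -rf "$tmp"
```

On the real device, `oflag=direct` bypasses the page cache so the verify exercises the nbd path rather than cached pages; the `cmp` succeeding is what lets the log proceed silently to removing `nbdrandtest`.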
00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.958 13:22:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.218 13:22:25 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.218 13:22:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.477 13:22:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.737 13:22:25 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.737 13:22:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.737 13:22:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.996 13:22:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.376 [2024-11-18 13:22:27.184079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.376 [2024-11-18 13:22:27.297009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.376 [2024-11-18 13:22:27.297034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.636 [2024-11-18 13:22:27.487969] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.636 [2024-11-18 13:22:27.488041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.018 spdk_app_start Round 2 00:05:59.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:59.018 13:22:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.018 13:22:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.018 13:22:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58370 /var/tmp/spdk-nbd.sock 00:05:59.018 13:22:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58370 ']' 00:05:59.018 13:22:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.018 13:22:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.018 13:22:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.018 13:22:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.018 13:22:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.279 13:22:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.279 13:22:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.279 13:22:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.541 Malloc0 00:05:59.541 13:22:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.800 Malloc1 00:05:59.800 13:22:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.800 13:22:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.060 /dev/nbd0 00:06:00.060 13:22:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.060 13:22:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
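The `waitfornbd` helper running above (from `common/autotest_common.sh` in the log) does two things after `nbd_start_disk`: it polls `/proc/partitions` until the device name shows up, then reads one 4 KiB block back with `dd iflag=direct` and checks a non-zero size landed, proving the device actually answers I/O. A sketch of both steps using plain files in place of `/proc/partitions` and the device (the function name and paths here are stand-ins):

```shell
# Wait for $name to appear in a partitions-style file, then read one 4 KiB
# block from $dev and require that something was transferred.
wait_for_nbd() {
    local name=$1 parts=$2 dev=$3 out size i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$parts" && break
        sleep 0.1
    done
    out=$(mktemp)
    dd if="$dev" of="$out" bs=4096 count=1 status=none
    size=$(stat -c %s "$out")
    rm -f "$out"
    [ "$size" != 0 ]   # non-empty read means the device answered
}

tmp=$(mktemp -d)
printf 'nbd0\n' > "$tmp/partitions"
dd if=/dev/zero of="$tmp/dev" bs=4096 count=1 status=none
wait_for_nbd nbd0 "$tmp/partitions" "$tmp/dev" && echo "nbd0 up"
rm -rf "$tmp"
```

This is the source of the repeated `1+0 records in / 1+0 records out / 4096 bytes` lines throughout the log: one probe read per device per round.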
00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.060 1+0 records in 00:06:00.060 1+0 records out 00:06:00.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204231 s, 20.1 MB/s 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.060 13:22:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.060 13:22:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.060 13:22:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.060 13:22:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.319 /dev/nbd1 00:06:00.319 13:22:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.319 13:22:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.319 13:22:30 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.319 1+0 records in 00:06:00.319 1+0 records out 00:06:00.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452166 s, 9.1 MB/s 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.319 13:22:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.319 13:22:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.319 13:22:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.319 13:22:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.319 13:22:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.319 13:22:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.579 { 00:06:00.579 "nbd_device": "/dev/nbd0", 00:06:00.579 "bdev_name": "Malloc0" 00:06:00.579 }, 00:06:00.579 { 00:06:00.579 "nbd_device": "/dev/nbd1", 00:06:00.579 "bdev_name": "Malloc1" 00:06:00.579 } 00:06:00.579 ]' 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.579 { 
00:06:00.579 "nbd_device": "/dev/nbd0", 00:06:00.579 "bdev_name": "Malloc0" 00:06:00.579 }, 00:06:00.579 { 00:06:00.579 "nbd_device": "/dev/nbd1", 00:06:00.579 "bdev_name": "Malloc1" 00:06:00.579 } 00:06:00.579 ]' 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.579 /dev/nbd1' 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.579 /dev/nbd1' 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.579 256+0 records in 00:06:00.579 256+0 records out 00:06:00.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653139 s, 161 MB/s 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.579 13:22:30 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.579 256+0 records in 00:06:00.579 256+0 records out 00:06:00.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216848 s, 48.4 MB/s 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.579 13:22:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.839 256+0 records in 00:06:00.839 256+0 records out 00:06:00.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286641 s, 36.6 MB/s 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.839 13:22:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
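The trace above exercises two patterns from SPDK's bdev/nbd_common.sh: waitfornbd (poll up to 20 times, then prove the device answers a 4 KiB read) and nbd_dd_data_verify (write random data to each device, then byte-compare it back with cmp). A condensed sketch follows; real runs target /dev/nbd0 and /dev/nbd1 with `iflag=direct`/`oflag=direct` and a /proc/partitions check, while this sketch substitutes plain files so it can run anywhere — that substitution is an assumption, not SPDK's code.

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd + nbd_dd_data_verify flow seen in the trace.
# Plain files stand in for /dev/nbdX; comments note the real-device variants.
set -u

waitfornbd() {
	local dev=$1 tmp=$2 i
	for ((i = 1; i <= 20; i++)); do
		# real helper: grep -q -w "$(basename "$dev")" /proc/partitions
		[[ -e $dev ]] && break
		sleep 0.1
	done
	# a single read that copies >0 bytes shows the device services I/O
	dd if="$dev" of="$tmp" bs=4096 count=1 2>/dev/null
	[[ $(stat -c %s "$tmp") -ne 0 ]]
}

nbd_dd_data_verify() {
	local rand=$1; shift
	# 256 x 4 KiB = 1 MiB of random data, as in the "256+0 records" lines
	dd if=/dev/urandom of="$rand" bs=4096 count=256 2>/dev/null
	local dev
	for dev in "$@"; do
		dd if="$rand" of="$dev" bs=4096 count=256 2>/dev/null  # real: oflag=direct
		cmp -n 1048576 "$rand" "$dev" || return 1              # real: cmp -b -n 1M
	done
}

# demo against temp files standing in for /dev/nbd0 and /dev/nbd1
work=$(mktemp -d)
ready=no verified=no
waitfornbd /etc/passwd "$work/readback" && ready=yes
nbd_dd_data_verify "$work/rand" "$work/nbd0" "$work/nbd1" && verified=yes
rm -rf "$work"
```

The read-after-wait step matters on real NBD devices: /proc/partitions can list the device before the kernel is ready to service I/O, so the dd acts as a liveness probe.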
00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.840 13:22:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.103 13:22:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.103 13:22:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.103 13:22:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.103 13:22:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.103 13:22:31 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.103 13:22:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.375 13:22:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.375 13:22:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.944 13:22:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.325 
[2024-11-18 13:22:32.939400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.325 [2024-11-18 13:22:33.048824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.325 [2024-11-18 13:22:33.048827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.325 [2024-11-18 13:22:33.244677] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.325 [2024-11-18 13:22:33.244756] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.272 13:22:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58370 /var/tmp/spdk-nbd.sock 00:06:05.273 13:22:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58370 ']' 00:06:05.273 13:22:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.273 13:22:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.273 13:22:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
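The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock..." lines above come from a waitforlisten helper with max_retries=100. A minimal sketch of that polling pattern follows; checking for the socket path with a file test instead of an actual RPC probe is a simplification, and the demo's background process is hypothetical.

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the target pid is alive
# AND its UNIX-domain RPC socket path exists, up to max_retries attempts.
waitforlisten() {
	local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
	for ((i = 0; i < max_retries; i++)); do
		kill -0 "$pid" 2>/dev/null || return 1      # process died: give up
		[[ -S $rpc_addr || -e $rpc_addr ]] && return 0
		sleep 0.1
	done
	return 1
}

# demo: a background job creates its "socket" file after a short delay
addr=$(mktemp -u)
( sleep 0.2; : > "$addr"; sleep 2 ) &
bg=$!
waitforlisten "$bg" "$addr" && listening=yes || listening=no
kill "$bg" 2>/dev/null; wait "$bg" 2>/dev/null || true
rm -f "$addr"
```

Checking `kill -0` on every iteration is what lets the helper fail fast when the target crashes during startup, instead of burning all 100 retries.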
00:06:05.273 13:22:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.273 13:22:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.273 13:22:35 event.app_repeat -- event/event.sh@39 -- # killprocess 58370 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58370 ']' 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58370 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58370 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58370' 00:06:05.273 killing process with pid 58370 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58370 00:06:05.273 13:22:35 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58370 00:06:06.212 spdk_app_start is called in Round 0. 00:06:06.212 Shutdown signal received, stop current app iteration 00:06:06.212 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:06.212 spdk_app_start is called in Round 1. 00:06:06.212 Shutdown signal received, stop current app iteration 00:06:06.212 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:06.212 spdk_app_start is called in Round 2. 
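The killprocess trace above (kill -0, uname, ps --no-headers -o comm=, then kill and wait) follows a recognizable shape; below is a reconstruction from the xtrace lines, not SPDK's exact autotest_common.sh source. The sudo branch the log hints at ('[' reactor_0 = sudo ']') is omitted.

```shell
#!/usr/bin/env bash
# Reconstructed killprocess: verify the pid is alive, look up its comm name
# on Linux, announce the kill, then signal and reap it.
killprocess() {
	local pid=$1
	kill -0 "$pid" 2>/dev/null || return 1          # nothing to kill
	local process_name=unknown
	if [[ $(uname) == Linux ]]; then
		process_name=$(ps --no-headers -o comm= "$pid")
	fi
	echo "killing process with pid $pid ($process_name)"
	kill "$pid"
	wait "$pid" 2>/dev/null || true                 # reap, like the log's 'wait 58370'
}

# demo against a throwaway background sleep
sleep 5 & victim=$!
killprocess "$victim" && killed=yes || killed=no
```

Looking up the comm name before killing is what produces the log's `process_name=reactor_0` line; it lets the harness special-case processes launched under sudo.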
00:06:06.212 Shutdown signal received, stop current app iteration 00:06:06.212 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:06.212 spdk_app_start is called in Round 3. 00:06:06.212 Shutdown signal received, stop current app iteration 00:06:06.212 13:22:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.212 13:22:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.212 00:06:06.212 real 0m19.217s 00:06:06.212 user 0m41.186s 00:06:06.212 sys 0m2.691s 00:06:06.212 13:22:36 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.212 13:22:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.212 ************************************ 00:06:06.212 END TEST app_repeat 00:06:06.212 ************************************ 00:06:06.212 13:22:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.212 13:22:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.212 13:22:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.212 13:22:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.212 13:22:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.212 ************************************ 00:06:06.212 START TEST cpu_locks 00:06:06.212 ************************************ 00:06:06.212 13:22:36 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.473 * Looking for test storage... 
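The starred "START TEST app_repeat" / "END TEST app_repeat" banners and the real/user/sys summary above are emitted by a run_test wrapper; the sketch below is a hypothetical reconstruction of that banner-and-time pattern, not the harness's actual code.

```shell
#!/usr/bin/env bash
# Hypothetical run_test wrapper: banner, time the test command, banner again,
# and propagate the test's exit status.
run_test() {
	local name=$1; shift
	local rc=0
	echo "************************************"
	echo "START TEST $name"
	echo "************************************"
	time "$@" || rc=$?       # bash's time keyword prints real/user/sys to stderr
	echo "************************************"
	echo "END TEST $name"
	echo "************************************"
	return $rc
}

# demo with a trivially passing command
out=$(run_test demo_true true 2>&1)
```

Returning the wrapped command's status (rather than the banner echo's) is what lets failures propagate up to the outer `run_test cpu_locks ...` call.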
00:06:06.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:06.473 13:22:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.473 13:22:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.473 13:22:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.473 13:22:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.473 13:22:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:06.473 13:22:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.473 13:22:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.473 --rc genhtml_branch_coverage=1 00:06:06.473 --rc genhtml_function_coverage=1 00:06:06.473 --rc genhtml_legend=1 00:06:06.473 --rc geninfo_all_blocks=1 00:06:06.473 --rc geninfo_unexecuted_blocks=1 00:06:06.473 00:06:06.473 ' 00:06:06.473 13:22:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.473 --rc genhtml_branch_coverage=1 00:06:06.474 --rc genhtml_function_coverage=1 00:06:06.474 --rc genhtml_legend=1 00:06:06.474 --rc geninfo_all_blocks=1 00:06:06.474 --rc geninfo_unexecuted_blocks=1 
00:06:06.474 00:06:06.474 ' 00:06:06.474 13:22:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.474 --rc genhtml_branch_coverage=1 00:06:06.474 --rc genhtml_function_coverage=1 00:06:06.474 --rc genhtml_legend=1 00:06:06.474 --rc geninfo_all_blocks=1 00:06:06.474 --rc geninfo_unexecuted_blocks=1 00:06:06.474 00:06:06.474 ' 00:06:06.474 13:22:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.474 --rc genhtml_branch_coverage=1 00:06:06.474 --rc genhtml_function_coverage=1 00:06:06.474 --rc genhtml_legend=1 00:06:06.474 --rc geninfo_all_blocks=1 00:06:06.474 --rc geninfo_unexecuted_blocks=1 00:06:06.474 00:06:06.474 ' 00:06:06.474 13:22:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.474 13:22:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.474 13:22:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.474 13:22:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.474 13:22:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.474 13:22:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.474 13:22:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.474 ************************************ 00:06:06.474 START TEST default_locks 00:06:06.474 ************************************ 00:06:06.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
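The scripts/common.sh trace above (`lt 1.15 2` driving `cmp_versions 1.15 '<' 2`) splits each version on `.`, `-`, and `:` with IFS and compares component by component, padding the shorter list with zeros. A condensed sketch of that compare — renamed version_lt here, since the log's helper is the more general cmp_versions:

```shell
#!/usr/bin/env bash
# Condensed sketch of the cmp_versions logic: IFS-split both versions,
# compare numerically per component, treat missing components as 0.
version_lt() {
	local -a ver1 ver2
	IFS=.-: read -ra ver1 <<< "$1"
	IFS=.-: read -ra ver2 <<< "$2"
	local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v
	for ((v = 0; v < n; v++)); do
		local a=${ver1[v]:-0} b=${ver2[v]:-0}
		if (( a < b )); then return 0; fi   # strictly less-than
		if (( a > b )); then return 1; fi
	done
	return 1                                # equal: not less-than
}

version_lt 1.15 2 && old=yes || old=no        # as in the log's 'lt 1.15 2'
version_lt 2.39 2.7 && weird=yes || weird=no  # numeric, not lexical: 39 > 7
```

The per-component numeric compare is the point: a naive string compare would wrongly rank 2.7 above 2.39, which matters when gating lcov options on its version.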
00:06:06.474 13:22:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58817 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58817 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58817 ']' 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.474 13:22:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.474 [2024-11-18 13:22:36.504552] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:06.474 [2024-11-18 13:22:36.504661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58817 ] 00:06:06.734 [2024-11-18 13:22:36.675516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.995 [2024-11-18 13:22:36.787510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58817 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58817 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58817 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58817 ']' 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58817 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58817 00:06:07.935 killing process with pid 58817 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58817' 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58817 00:06:07.935 13:22:37 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58817 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58817 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58817 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58817 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58817 ']' 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:10.473 ERROR: process (pid: 58817) is no longer running 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.473 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58817) - No such process 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.473 00:06:10.473 real 0m3.969s 00:06:10.473 user 0m3.912s 00:06:10.473 sys 0m0.583s 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.473 13:22:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.473 ************************************ 00:06:10.473 END TEST default_locks 00:06:10.473 ************************************ 00:06:10.473 13:22:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:10.473 13:22:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:10.473 13:22:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.473 13:22:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.473 ************************************ 00:06:10.473 START TEST default_locks_via_rpc 00:06:10.473 ************************************ 00:06:10.473 13:22:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:10.473 13:22:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58887 00:06:10.473 13:22:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58887 00:06:10.473 13:22:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.473 13:22:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58887 ']' 00:06:10.473 13:22:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.473 13:22:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.473 13:22:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.474 13:22:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.474 13:22:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.733 [2024-11-18 13:22:40.538314] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
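The locks_exist checks in the trace above (`lslocks -p <pid>` piped to `grep -q spdk_cpu_lock`) verify that the target holds its CPU-mask lock files. A sketch of that check follows; the demo uses flock on a temp file whose name contains the spdk_cpu_lock pattern, which is a stand-in for the real lock files, and it skips gracefully where util-linux tools are absent.

```shell
#!/usr/bin/env bash
# Sketch of locks_exist: does the given pid hold a lock whose path matches
# the expected spdk_cpu_lock pattern, according to lslocks?
locks_exist() {
	local pid=$1
	lslocks -p "$pid" | grep -q spdk_cpu_lock
}

held=skip
if command -v lslocks >/dev/null && command -v flock >/dev/null; then
	lockfile=$(mktemp /tmp/spdk_cpu_lock.XXXXXX)
	# flock execs the command while holding the lock, so $! is the holder pid
	flock "$lockfile" sleep 2 &
	holder=$!
	sleep 0.3
	locks_exist "$holder" && held=yes || held=no
	kill "$holder" 2>/dev/null; wait "$holder" 2>/dev/null || true
	rm -f "$lockfile"
fi
```

Grepping lslocks output by pid is how the test distinguishes "spdk_tgt is up" from "spdk_tgt is up and actually took its core locks", which is what the default_locks test asserts.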
00:06:10.733 [2024-11-18 13:22:40.538500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58887 ] 00:06:10.733 [2024-11-18 13:22:40.718359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.991 [2024-11-18 13:22:40.831796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.930 13:22:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58887 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58887 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58887 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58887 ']' 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58887 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58887 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58887' 00:06:11.930 killing process with pid 58887 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58887 00:06:11.930 13:22:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58887 00:06:14.492 00:06:14.492 real 0m3.899s 00:06:14.492 user 0m3.888s 00:06:14.492 sys 0m0.569s 00:06:14.492 13:22:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.492 13:22:44 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.492 ************************************ 00:06:14.492 END TEST default_locks_via_rpc 00:06:14.492 ************************************ 00:06:14.492 13:22:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:14.492 13:22:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.492 13:22:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.492 13:22:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.492 ************************************ 00:06:14.492 START TEST non_locking_app_on_locked_coremask 00:06:14.492 ************************************ 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58961 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58961 /var/tmp/spdk.sock 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58961 ']' 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:14.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.492 13:22:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.492 [2024-11-18 13:22:44.501996] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:14.492 [2024-11-18 13:22:44.502244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:06:14.752 [2024-11-18 13:22:44.678388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.752 [2024-11-18 13:22:44.794910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58977 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58977 /var/tmp/spdk2.sock 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58977 ']' 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.692 13:22:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.692 13:22:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.952 [2024-11-18 13:22:45.745029] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:15.952 [2024-11-18 13:22:45.745238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:06:15.952 [2024-11-18 13:22:45.911040] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.952 [2024-11-18 13:22:45.911091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.211 [2024-11-18 13:22:46.151381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58961 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58961 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58961 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58961 ']' 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58961 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58961 00:06:18.751 killing process with pid 58961 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58961' 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58961 00:06:18.751 13:22:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58961 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58977 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58977 ']' 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58977 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58977 00:06:24.028 killing process with pid 58977 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58977' 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58977 00:06:24.028 13:22:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58977 00:06:25.947 00:06:25.947 real 0m11.576s 00:06:25.947 user 0m11.832s 00:06:25.947 sys 0m1.201s 00:06:25.947 13:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:25.947 13:22:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.947 ************************************ 00:06:25.947 END TEST non_locking_app_on_locked_coremask 00:06:25.947 ************************************ 00:06:26.221 13:22:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:26.221 13:22:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.221 13:22:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.221 13:22:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.221 ************************************ 00:06:26.221 START TEST locking_app_on_unlocked_coremask 00:06:26.221 ************************************ 00:06:26.221 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:26.221 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59129 00:06:26.221 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:26.222 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59129 /var/tmp/spdk.sock 00:06:26.222 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59129 ']' 00:06:26.222 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.222 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.222 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.222 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.222 13:22:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.222 [2024-11-18 13:22:56.146441] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:26.222 [2024-11-18 13:22:56.146662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59129 ] 00:06:26.481 [2024-11-18 13:22:56.320878] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.481 [2024-11-18 13:22:56.321034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.481 [2024-11-18 13:22:56.441767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59145 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59145 /var/tmp/spdk2.sock 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59145 ']' 00:06:27.420 13:22:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.420 13:22:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.420 [2024-11-18 13:22:57.391123] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:27.421 [2024-11-18 13:22:57.391330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:06:27.679 [2024-11-18 13:22:57.564268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.940 [2024-11-18 13:22:57.789841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.475 13:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.475 13:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.475 13:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59145 00:06:30.475 13:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59145 00:06:30.475 13:22:59 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59129 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59129 ']' 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59129 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59129 00:06:30.475 killing process with pid 59129 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59129' 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59129 00:06:30.475 13:23:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59129 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59145 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59145 ']' 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59145 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.757 
13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59145 00:06:35.757 killing process with pid 59145 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59145' 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59145 00:06:35.757 13:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59145 00:06:38.295 ************************************ 00:06:38.295 END TEST locking_app_on_unlocked_coremask 00:06:38.295 ************************************ 00:06:38.295 00:06:38.295 real 0m11.677s 00:06:38.295 user 0m11.993s 00:06:38.295 sys 0m1.190s 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.295 13:23:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:38.295 13:23:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.295 13:23:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.295 13:23:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.295 ************************************ 00:06:38.295 START TEST locking_app_on_locked_coremask 00:06:38.295 
************************************ 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59302 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59302 /var/tmp/spdk.sock 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59302 ']' 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.295 13:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.295 [2024-11-18 13:23:07.890254] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:38.295 [2024-11-18 13:23:07.890474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:06:38.295 [2024-11-18 13:23:08.065392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.295 [2024-11-18 13:23:08.182413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59318 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59318 /var/tmp/spdk2.sock 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59318 /var/tmp/spdk2.sock 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:39.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59318 /var/tmp/spdk2.sock 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59318 ']' 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.293 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.293 [2024-11-18 13:23:09.150383] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:39.293 [2024-11-18 13:23:09.150600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59318 ] 00:06:39.293 [2024-11-18 13:23:09.343969] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59302 has claimed it. 00:06:39.293 [2024-11-18 13:23:09.344037] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:39.862 ERROR: process (pid: 59318) is no longer running 00:06:39.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59318) - No such process 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59302 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59302 00:06:39.862 13:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.123 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59302 00:06:40.123 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59302 ']' 00:06:40.123 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59302 00:06:40.123 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.123 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.123 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59302 00:06:40.382 
killing process with pid 59302 00:06:40.382 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.382 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.383 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59302' 00:06:40.383 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59302 00:06:40.383 13:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59302 00:06:42.977 00:06:42.977 real 0m4.798s 00:06:42.977 user 0m5.016s 00:06:42.977 sys 0m0.771s 00:06:42.977 13:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.977 13:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.977 ************************************ 00:06:42.977 END TEST locking_app_on_locked_coremask 00:06:42.977 ************************************ 00:06:42.977 13:23:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.977 13:23:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.977 13:23:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.977 13:23:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.977 ************************************ 00:06:42.977 START TEST locking_overlapped_coremask 00:06:42.977 ************************************ 00:06:42.977 13:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:42.977 13:23:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59387 00:06:42.978 13:23:12 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.978 13:23:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59387 /var/tmp/spdk.sock 00:06:42.978 13:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59387 ']' 00:06:42.978 13:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.978 13:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.978 13:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.978 13:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.978 13:23:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.978 [2024-11-18 13:23:12.755910] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:42.978 [2024-11-18 13:23:12.756125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59387 ] 00:06:42.978 [2024-11-18 13:23:12.930347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.237 [2024-11-18 13:23:13.051614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.237 [2024-11-18 13:23:13.051770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.237 [2024-11-18 13:23:13.051825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59411 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59411 /var/tmp/spdk2.sock 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59411 /var/tmp/spdk2.sock 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59411 /var/tmp/spdk2.sock 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59411 ']' 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.179 13:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.179 [2024-11-18 13:23:14.049034] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:44.179 [2024-11-18 13:23:14.049254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59411 ] 00:06:44.179 [2024-11-18 13:23:14.224229] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59387 has claimed it. 00:06:44.179 [2024-11-18 13:23:14.224463] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
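The `NOT waitforlisten` invocation traced above is an expect-failure wrapper: the second target must fail to start because core 2 is already claimed, and the test passes only when that failure is observed. A minimal sketch of the inversion pattern (a hypothetical standalone rewrite, not the real autotest_common.sh helper, which also validates the argument type first):

```shell
# Expect-failure wrapper: succeed only when the wrapped command fails.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded -> test failure
  fi
  return 0     # command failed as expected -> test success
}

NOT false && echo "expected failure observed"
```

In the log, the wrapped command is `waitforlisten 59411 /var/tmp/spdk2.sock`, which fails once the second `spdk_tgt` exits with "Cannot create lock on core 2".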
00:06:44.750 ERROR: process (pid: 59411) is no longer running 00:06:44.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59411) - No such process 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59387 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59387 ']' 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59387 00:06:44.750 13:23:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59387 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59387' 00:06:44.750 killing process with pid 59387 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59387 00:06:44.750 13:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59387 00:06:47.301 00:06:47.301 real 0m4.518s 00:06:47.301 user 0m12.279s 00:06:47.301 sys 0m0.591s 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.301 ************************************ 00:06:47.301 END TEST locking_overlapped_coremask 00:06:47.301 ************************************ 00:06:47.301 13:23:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:47.301 13:23:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.301 13:23:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.301 13:23:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.301 ************************************ 00:06:47.301 START TEST 
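The `check_remaining_locks` step above compares a glob of the lock files actually on disk against a brace-expanded list of the files expected for core mask 0x7 (cores 0-2). A minimal sketch of that comparison, using a scratch directory instead of the real `/var/tmp/spdk_cpu_lock_*` paths:

```shell
# Sketch of the check_remaining_locks pattern: glob vs. brace expansion.
# Uses a temp dir so the real SPDK lock files are untouched.
dir=$(mktemp -d)
touch "$dir"/spdk_cpu_lock_{000..002}             # simulate locks for cores 0-2

locks=("$dir"/spdk_cpu_lock_*)                    # glob: what actually exists
locks_expected=("$dir"/spdk_cpu_lock_{000..002})  # expansion: what should exist

[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match"
rm -rf "$dir"
```

Because glob results sort lexicographically and the expected names are zero-padded, the two arrays line up element for element when and only when exactly the expected lock files remain.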
locking_overlapped_coremask_via_rpc 00:06:47.301 ************************************ 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59475 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59475 /var/tmp/spdk.sock 00:06:47.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59475 ']' 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.301 13:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.301 [2024-11-18 13:23:17.341336] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:47.301 [2024-11-18 13:23:17.341449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59475 ] 00:06:47.560 [2024-11-18 13:23:17.518757] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.560 [2024-11-18 13:23:17.518830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.820 [2024-11-18 13:23:17.648317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.820 [2024-11-18 13:23:17.649072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.820 [2024-11-18 13:23:17.649179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59493 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59493 /var/tmp/spdk2.sock 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59493 ']' 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.757 13:23:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.757 13:23:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.757 [2024-11-18 13:23:18.676524] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:48.757 [2024-11-18 13:23:18.676637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59493 ] 00:06:49.016 [2024-11-18 13:23:18.854923] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.016 [2024-11-18 13:23:18.855242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.285 [2024-11-18 13:23:19.151407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.285 [2024-11-18 13:23:19.151610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:49.286 [2024-11-18 13:23:19.151848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.825 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.825 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.826 13:23:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.826 [2024-11-18 13:23:21.325391] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59475 has claimed it. 00:06:51.826 request: 00:06:51.826 { 00:06:51.826 "method": "framework_enable_cpumask_locks", 00:06:51.826 "req_id": 1 00:06:51.826 } 00:06:51.826 Got JSON-RPC error response 00:06:51.826 response: 00:06:51.826 { 00:06:51.826 "code": -32603, 00:06:51.826 "message": "Failed to claim CPU core: 2" 00:06:51.826 } 00:06:51.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59475 /var/tmp/spdk.sock 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59475 ']' 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59493 /var/tmp/spdk2.sock 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59493 ']' 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.826 00:06:51.826 real 0m4.581s 00:06:51.826 user 0m1.384s 00:06:51.826 sys 0m0.217s 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.826 13:23:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.826 ************************************ 00:06:51.826 END TEST locking_overlapped_coremask_via_rpc 00:06:51.826 ************************************ 00:06:51.826 13:23:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:51.826 13:23:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59475 ]] 00:06:51.826 13:23:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59475 00:06:51.826 13:23:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59475 ']' 00:06:51.826 13:23:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59475 00:06:51.826 13:23:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:51.826 13:23:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.826 13:23:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59475 00:06:52.085 13:23:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.085 13:23:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.085 13:23:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59475' 00:06:52.085 killing process with pid 59475 00:06:52.085 13:23:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59475 00:06:52.085 13:23:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59475 00:06:55.372 13:23:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59493 ]] 00:06:55.372 13:23:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59493 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59493 ']' 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59493 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59493 00:06:55.372 killing process with pid 59493 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59493' 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59493 00:06:55.372 13:23:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59493 00:06:57.908 13:23:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.908 13:23:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.908 13:23:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59475 ]] 00:06:57.908 13:23:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59475 00:06:57.908 13:23:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59475 ']' 00:06:57.908 13:23:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59475 00:06:57.908 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59475) - No such process 00:06:57.908 Process with pid 59475 is not found 00:06:57.908 Process with pid 59493 is not found 00:06:57.908 13:23:27 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59475 is not found' 00:06:57.908 13:23:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59493 ]] 00:06:57.908 13:23:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59493 00:06:57.908 13:23:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59493 ']' 00:06:57.908 13:23:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59493 00:06:57.908 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59493) - No such process 00:06:57.908 13:23:27 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59493 is not found' 00:06:57.908 13:23:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.908 00:06:57.908 real 0m51.493s 00:06:57.908 user 1m30.175s 00:06:57.908 sys 0m6.553s 00:06:57.908 13:23:27 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.908 13:23:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.908 
************************************ 00:06:57.908 END TEST cpu_locks 00:06:57.908 ************************************ 00:06:57.908 00:06:57.908 real 1m21.756s 00:06:57.908 user 2m28.503s 00:06:57.908 sys 0m10.395s 00:06:57.908 13:23:27 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.908 13:23:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.908 ************************************ 00:06:57.908 END TEST event 00:06:57.908 ************************************ 00:06:57.908 13:23:27 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.908 13:23:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.908 13:23:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.908 13:23:27 -- common/autotest_common.sh@10 -- # set +x 00:06:57.908 ************************************ 00:06:57.908 START TEST thread 00:06:57.908 ************************************ 00:06:57.908 13:23:27 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.908 * Looking for test storage... 
00:06:57.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:57.908 13:23:27 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.908 13:23:27 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.908 13:23:27 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.167 13:23:27 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.168 13:23:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.168 13:23:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.168 13:23:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.168 13:23:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.168 13:23:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.168 13:23:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.168 13:23:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.168 13:23:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.168 13:23:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.168 13:23:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.168 13:23:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.168 13:23:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:58.168 13:23:27 thread -- scripts/common.sh@345 -- # : 1 00:06:58.168 13:23:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.168 13:23:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.168 13:23:27 thread -- scripts/common.sh@365 -- # decimal 1 00:06:58.168 13:23:28 thread -- scripts/common.sh@353 -- # local d=1 00:06:58.168 13:23:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.168 13:23:28 thread -- scripts/common.sh@355 -- # echo 1 00:06:58.168 13:23:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.168 13:23:28 thread -- scripts/common.sh@366 -- # decimal 2 00:06:58.168 13:23:28 thread -- scripts/common.sh@353 -- # local d=2 00:06:58.168 13:23:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.168 13:23:28 thread -- scripts/common.sh@355 -- # echo 2 00:06:58.168 13:23:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.168 13:23:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.168 13:23:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.168 13:23:28 thread -- scripts/common.sh@368 -- # return 0 00:06:58.168 13:23:28 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.168 13:23:28 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.168 --rc genhtml_branch_coverage=1 00:06:58.168 --rc genhtml_function_coverage=1 00:06:58.168 --rc genhtml_legend=1 00:06:58.168 --rc geninfo_all_blocks=1 00:06:58.168 --rc geninfo_unexecuted_blocks=1 00:06:58.168 00:06:58.168 ' 00:06:58.168 13:23:28 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.168 --rc genhtml_branch_coverage=1 00:06:58.168 --rc genhtml_function_coverage=1 00:06:58.168 --rc genhtml_legend=1 00:06:58.168 --rc geninfo_all_blocks=1 00:06:58.168 --rc geninfo_unexecuted_blocks=1 00:06:58.168 00:06:58.168 ' 00:06:58.168 13:23:28 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.168 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.168 --rc genhtml_branch_coverage=1 00:06:58.168 --rc genhtml_function_coverage=1 00:06:58.168 --rc genhtml_legend=1 00:06:58.168 --rc geninfo_all_blocks=1 00:06:58.168 --rc geninfo_unexecuted_blocks=1 00:06:58.168 00:06:58.168 ' 00:06:58.168 13:23:28 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.168 --rc genhtml_branch_coverage=1 00:06:58.168 --rc genhtml_function_coverage=1 00:06:58.168 --rc genhtml_legend=1 00:06:58.168 --rc geninfo_all_blocks=1 00:06:58.168 --rc geninfo_unexecuted_blocks=1 00:06:58.168 00:06:58.168 ' 00:06:58.168 13:23:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.168 13:23:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:58.168 13:23:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.168 13:23:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.168 ************************************ 00:06:58.168 START TEST thread_poller_perf 00:06:58.168 ************************************ 00:06:58.168 13:23:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.168 [2024-11-18 13:23:28.078959] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
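The `cmp_versions 1.15 '<' 2` trace above is the lcov version gate: each version string is split on dots and compared component by component, with the shorter one padded with zeros. A simplified sketch of that comparison (assumed behavior; the real scripts/common.sh helper also splits on '-' and ':' and handles the other operators):

```shell
# Simplified dotted-version "less than" test, modeled on the trace above.
version_lt() {
  local -a v1 v2
  IFS=. read -ra v1 <<< "$1"
  IFS=. read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing components with 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo yes || echo no   # the lcov check: 1 < 2 at component 0
```

Note the comparison is numeric per component, not lexical, so `1.15 < 2` holds even though the string "1.15" sorts after "2" would not suggest it.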
00:06:58.168 [2024-11-18 13:23:28.079176] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:06:58.427 [2024-11-18 13:23:28.252785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.427 [2024-11-18 13:23:28.397707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.427 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:59.823 [2024-11-18T13:23:29.877Z] ====================================== 00:06:59.823 [2024-11-18T13:23:29.877Z] busy:2302589544 (cyc) 00:06:59.823 [2024-11-18T13:23:29.877Z] total_run_count: 372000 00:06:59.823 [2024-11-18T13:23:29.877Z] tsc_hz: 2290000000 (cyc) 00:06:59.823 [2024-11-18T13:23:29.877Z] ====================================== 00:06:59.823 [2024-11-18T13:23:29.877Z] poller_cost: 6189 (cyc), 2702 (nsec) 00:06:59.823 00:06:59.823 real 0m1.638s 00:06:59.823 user 0m1.422s 00:06:59.823 sys 0m0.109s 00:06:59.823 ************************************ 00:06:59.823 END TEST thread_poller_perf 00:06:59.823 ************************************ 00:06:59.823 13:23:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.823 13:23:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.823 13:23:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.823 13:23:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:59.823 13:23:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.823 13:23:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.823 ************************************ 00:06:59.823 START TEST thread_poller_perf 00:06:59.823 
************************************ 00:06:59.823 13:23:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.823 [2024-11-18 13:23:29.779156] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:59.823 [2024-11-18 13:23:29.779324] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59741 ] 00:07:00.081 [2024-11-18 13:23:29.954007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.081 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:00.081 [2024-11-18 13:23:30.103747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.460 [2024-11-18T13:23:31.514Z] ====================================== 00:07:01.460 [2024-11-18T13:23:31.514Z] busy:2294426532 (cyc) 00:07:01.460 [2024-11-18T13:23:31.514Z] total_run_count: 4894000 00:07:01.460 [2024-11-18T13:23:31.514Z] tsc_hz: 2290000000 (cyc) 00:07:01.460 [2024-11-18T13:23:31.514Z] ====================================== 00:07:01.460 [2024-11-18T13:23:31.514Z] poller_cost: 468 (cyc), 204 (nsec) 00:07:01.460 00:07:01.460 real 0m1.634s 00:07:01.460 user 0m1.406s 00:07:01.460 sys 0m0.120s 00:07:01.460 ************************************ 00:07:01.460 END TEST thread_poller_perf 00:07:01.460 ************************************ 00:07:01.460 13:23:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.460 13:23:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.460 13:23:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.460 ************************************ 00:07:01.460 END TEST thread 00:07:01.460 ************************************ 00:07:01.460 
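The poller_cost figures above follow directly from the printed counters: cost in cycles is busy cycles divided by total_run_count (integer division), and cost in nsec converts cycles to time via tsc_hz. Recomputing the 0-period run's numbers as a check:

```shell
# Recompute poller_cost for the 0-period run from its printed counters.
busy=2294426532        # busy: cycles spent polling
runs=4894000           # total_run_count
tsc_hz=2290000000      # tsc frequency: cycles per second
cost_cyc=$(( busy / runs ))                        # 468 cyc per poll
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))    # 204 nsec per poll
echo "$cost_cyc $cost_nsec"                        # prints: 468 204
```

The same arithmetic reproduces the 1-usec run: 2302589544 / 372000 = 6189 cyc, or 2702 nsec at 2.29 GHz, matching the first results block.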
00:07:01.460 real 0m3.620s 00:07:01.460 user 0m2.993s 00:07:01.460 sys 0m0.426s 00:07:01.460 13:23:31 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.460 13:23:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.460 13:23:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:01.460 13:23:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.460 13:23:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.460 13:23:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.460 13:23:31 -- common/autotest_common.sh@10 -- # set +x 00:07:01.460 ************************************ 00:07:01.460 START TEST app_cmdline 00:07:01.460 ************************************ 00:07:01.460 13:23:31 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.719 * Looking for test storage... 00:07:01.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.719 13:23:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.719 --rc genhtml_branch_coverage=1 00:07:01.719 --rc genhtml_function_coverage=1 00:07:01.719 --rc 
genhtml_legend=1 00:07:01.719 --rc geninfo_all_blocks=1 00:07:01.719 --rc geninfo_unexecuted_blocks=1 00:07:01.719 00:07:01.719 ' 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.719 --rc genhtml_branch_coverage=1 00:07:01.719 --rc genhtml_function_coverage=1 00:07:01.719 --rc genhtml_legend=1 00:07:01.719 --rc geninfo_all_blocks=1 00:07:01.719 --rc geninfo_unexecuted_blocks=1 00:07:01.719 00:07:01.719 ' 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.719 --rc genhtml_branch_coverage=1 00:07:01.719 --rc genhtml_function_coverage=1 00:07:01.719 --rc genhtml_legend=1 00:07:01.719 --rc geninfo_all_blocks=1 00:07:01.719 --rc geninfo_unexecuted_blocks=1 00:07:01.719 00:07:01.719 ' 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.719 --rc genhtml_branch_coverage=1 00:07:01.719 --rc genhtml_function_coverage=1 00:07:01.719 --rc genhtml_legend=1 00:07:01.719 --rc geninfo_all_blocks=1 00:07:01.719 --rc geninfo_unexecuted_blocks=1 00:07:01.719 00:07:01.719 ' 00:07:01.719 13:23:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.719 13:23:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59830 00:07:01.719 13:23:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.719 13:23:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59830 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59830 ']' 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:01.719 13:23:31 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.720 13:23:31 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.720 13:23:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.978 [2024-11-18 13:23:31.825880] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:01.978 [2024-11-18 13:23:31.826151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:07:01.978 [2024-11-18 13:23:32.010369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.236 [2024-11-18 13:23:32.166091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.174 13:23:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.174 13:23:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:03.174 13:23:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:03.434 { 00:07:03.434 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:07:03.434 "fields": { 00:07:03.434 "major": 25, 00:07:03.434 "minor": 1, 00:07:03.434 "patch": 0, 00:07:03.434 "suffix": "-pre", 00:07:03.434 "commit": "d47eb51c9" 00:07:03.434 } 00:07:03.434 } 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:03.434 13:23:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:03.434 13:23:33 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.694 request: 00:07:03.694 { 00:07:03.694 "method": "env_dpdk_get_mem_stats", 00:07:03.694 "req_id": 1 00:07:03.694 } 00:07:03.694 Got JSON-RPC error response 00:07:03.694 response: 00:07:03.694 { 00:07:03.694 "code": -32601, 00:07:03.694 "message": "Method not found" 00:07:03.694 } 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.694 13:23:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59830 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59830 ']' 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59830 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.694 13:23:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59830 00:07:03.954 killing process with pid 59830 00:07:03.954 13:23:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.954 13:23:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.954 13:23:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59830' 00:07:03.954 13:23:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 59830 00:07:03.954 13:23:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 59830 00:07:06.494 ************************************ 00:07:06.494 END TEST app_cmdline 00:07:06.494 ************************************ 
00:07:06.494 00:07:06.494 real 0m5.006s 00:07:06.494 user 0m5.038s 00:07:06.494 sys 0m0.811s 00:07:06.494 13:23:36 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.494 13:23:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.494 13:23:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:06.494 13:23:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.494 13:23:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.494 13:23:36 -- common/autotest_common.sh@10 -- # set +x 00:07:06.755 ************************************ 00:07:06.755 START TEST version 00:07:06.755 ************************************ 00:07:06.755 13:23:36 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:06.755 * Looking for test storage... 00:07:06.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:06.755 13:23:36 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.755 13:23:36 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.755 13:23:36 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.755 13:23:36 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.755 13:23:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.755 13:23:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.755 13:23:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.755 13:23:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.755 13:23:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.755 13:23:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.755 13:23:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.755 13:23:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.755 13:23:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.755 13:23:36 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:06.755 13:23:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.755 13:23:36 version -- scripts/common.sh@344 -- # case "$op" in 00:07:06.755 13:23:36 version -- scripts/common.sh@345 -- # : 1 00:07:06.755 13:23:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.755 13:23:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.755 13:23:36 version -- scripts/common.sh@365 -- # decimal 1 00:07:06.755 13:23:36 version -- scripts/common.sh@353 -- # local d=1 00:07:06.755 13:23:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.755 13:23:36 version -- scripts/common.sh@355 -- # echo 1 00:07:06.755 13:23:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.755 13:23:36 version -- scripts/common.sh@366 -- # decimal 2 00:07:06.755 13:23:36 version -- scripts/common.sh@353 -- # local d=2 00:07:06.755 13:23:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.755 13:23:36 version -- scripts/common.sh@355 -- # echo 2 00:07:06.755 13:23:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.756 13:23:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.756 13:23:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.756 13:23:36 version -- scripts/common.sh@368 -- # return 0 00:07:06.756 13:23:36 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.756 13:23:36 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.756 --rc genhtml_branch_coverage=1 00:07:06.756 --rc genhtml_function_coverage=1 00:07:06.756 --rc genhtml_legend=1 00:07:06.756 --rc geninfo_all_blocks=1 00:07:06.756 --rc geninfo_unexecuted_blocks=1 00:07:06.756 00:07:06.756 ' 00:07:06.756 13:23:36 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:06.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.756 --rc genhtml_branch_coverage=1 00:07:06.756 --rc genhtml_function_coverage=1 00:07:06.756 --rc genhtml_legend=1 00:07:06.756 --rc geninfo_all_blocks=1 00:07:06.756 --rc geninfo_unexecuted_blocks=1 00:07:06.756 00:07:06.756 ' 00:07:06.756 13:23:36 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.756 --rc genhtml_branch_coverage=1 00:07:06.756 --rc genhtml_function_coverage=1 00:07:06.756 --rc genhtml_legend=1 00:07:06.756 --rc geninfo_all_blocks=1 00:07:06.756 --rc geninfo_unexecuted_blocks=1 00:07:06.756 00:07:06.756 ' 00:07:06.756 13:23:36 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.756 --rc genhtml_branch_coverage=1 00:07:06.756 --rc genhtml_function_coverage=1 00:07:06.756 --rc genhtml_legend=1 00:07:06.756 --rc geninfo_all_blocks=1 00:07:06.756 --rc geninfo_unexecuted_blocks=1 00:07:06.756 00:07:06.756 ' 00:07:06.756 13:23:36 version -- app/version.sh@17 -- # get_header_version major 00:07:06.756 13:23:36 version -- app/version.sh@14 -- # cut -f2 00:07:06.756 13:23:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.756 13:23:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.756 13:23:36 version -- app/version.sh@17 -- # major=25 00:07:07.014 13:23:36 version -- app/version.sh@18 -- # get_header_version minor 00:07:07.014 13:23:36 version -- app/version.sh@14 -- # cut -f2 00:07:07.014 13:23:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.014 13:23:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.014 13:23:36 version -- app/version.sh@18 -- # minor=1 00:07:07.014 13:23:36 
version -- app/version.sh@19 -- # get_header_version patch 00:07:07.014 13:23:36 version -- app/version.sh@14 -- # cut -f2 00:07:07.014 13:23:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.014 13:23:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.014 13:23:36 version -- app/version.sh@19 -- # patch=0 00:07:07.014 13:23:36 version -- app/version.sh@20 -- # get_header_version suffix 00:07:07.015 13:23:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:07.015 13:23:36 version -- app/version.sh@14 -- # cut -f2 00:07:07.015 13:23:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:07.015 13:23:36 version -- app/version.sh@20 -- # suffix=-pre 00:07:07.015 13:23:36 version -- app/version.sh@22 -- # version=25.1 00:07:07.015 13:23:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:07.015 13:23:36 version -- app/version.sh@28 -- # version=25.1rc0 00:07:07.015 13:23:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:07.015 13:23:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:07.015 13:23:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:07.015 13:23:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:07.015 ************************************ 00:07:07.015 END TEST version 00:07:07.015 ************************************ 00:07:07.015 00:07:07.015 real 0m0.336s 00:07:07.015 user 0m0.206s 00:07:07.015 sys 0m0.187s 00:07:07.015 13:23:36 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.015 13:23:36 version -- common/autotest_common.sh@10 -- # set +x 00:07:07.015 
13:23:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:07.015 13:23:36 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:07.015 13:23:36 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:07.015 13:23:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.015 13:23:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.015 13:23:36 -- common/autotest_common.sh@10 -- # set +x 00:07:07.015 ************************************ 00:07:07.015 START TEST bdev_raid 00:07:07.015 ************************************ 00:07:07.015 13:23:36 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:07.274 * Looking for test storage... 00:07:07.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.274 13:23:37 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.274 --rc genhtml_branch_coverage=1 00:07:07.274 --rc genhtml_function_coverage=1 00:07:07.274 --rc genhtml_legend=1 00:07:07.274 --rc geninfo_all_blocks=1 00:07:07.274 --rc geninfo_unexecuted_blocks=1 00:07:07.274 00:07:07.274 ' 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.274 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:07.274 --rc genhtml_branch_coverage=1 00:07:07.274 --rc genhtml_function_coverage=1 00:07:07.274 --rc genhtml_legend=1 00:07:07.274 --rc geninfo_all_blocks=1 00:07:07.274 --rc geninfo_unexecuted_blocks=1 00:07:07.274 00:07:07.274 ' 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.274 --rc genhtml_branch_coverage=1 00:07:07.274 --rc genhtml_function_coverage=1 00:07:07.274 --rc genhtml_legend=1 00:07:07.274 --rc geninfo_all_blocks=1 00:07:07.274 --rc geninfo_unexecuted_blocks=1 00:07:07.274 00:07:07.274 ' 00:07:07.274 13:23:37 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.274 --rc genhtml_branch_coverage=1 00:07:07.274 --rc genhtml_function_coverage=1 00:07:07.274 --rc genhtml_legend=1 00:07:07.274 --rc geninfo_all_blocks=1 00:07:07.274 --rc geninfo_unexecuted_blocks=1 00:07:07.274 00:07:07.274 ' 00:07:07.274 13:23:37 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:07.275 13:23:37 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:07.275 13:23:37 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:07.275 13:23:37 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:07.275 13:23:37 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:07.275 13:23:37 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:07.275 13:23:37 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:07.275 13:23:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.275 13:23:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.275 13:23:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.275 ************************************ 
00:07:07.275 START TEST raid1_resize_data_offset_test 00:07:07.275 ************************************ 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60023 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60023' 00:07:07.275 Process raid pid: 60023 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60023 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60023 ']' 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.275 13:23:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.275 [2024-11-18 13:23:37.308126] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:07.275 [2024-11-18 13:23:37.308362] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.535 [2024-11-18 13:23:37.482847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.793 [2024-11-18 13:23:37.624263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.051 [2024-11-18 13:23:37.867788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.051 [2024-11-18 13:23:37.867979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.310 malloc0 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.310 malloc1 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.310 13:23:38 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.310 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.569 null0 00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.569 [2024-11-18 13:23:38.369938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:08.569 [2024-11-18 13:23:38.372227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:08.569 [2024-11-18 13:23:38.372274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:08.569 [2024-11-18 13:23:38.372443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:08.569 [2024-11-18 13:23:38.372459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:08.569 [2024-11-18 13:23:38.372750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:08.569 [2024-11-18 13:23:38.372957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:08.569 [2024-11-18 13:23:38.372972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:08.569 [2024-11-18 13:23:38.373165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:08.569 [2024-11-18 13:23:38.433845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:08.569 13:23:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.137 malloc2
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.137 [2024-11-18 13:23:39.103507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:09.137 [2024-11-18 13:23:39.124458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.137 [2024-11-18 13:23:39.126653] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60023
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60023 ']'
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60023
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:09.137 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60023
00:07:09.394 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:09.394 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:09.394 killing process with pid 60023
13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60023'
00:07:09.395 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60023
00:07:09.395 13:23:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60023
00:07:09.395 [2024-11-18 13:23:39.221702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:09.395 [2024-11-18 13:23:39.222569] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:07:09.395 [2024-11-18 13:23:39.222726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:09.395 [2024-11-18 13:23:39.222751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:07:09.395 [2024-11-18 13:23:39.265002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:09.395 [2024-11-18 13:23:39.265429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:09.395 [2024-11-18 13:23:39.265450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:11.293 [2024-11-18 13:23:41.322965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:12.668 ************************************
00:07:12.668 END TEST raid1_resize_data_offset_test
00:07:12.668 ************************************
00:07:12.668 13:23:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:07:12.668
00:07:12.668 real 0m5.355s
00:07:12.668 user 0m5.062s
00:07:12.668 sys 0m0.758s
00:07:12.668 13:23:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:12.668 13:23:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.668 13:23:42 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:07:12.668 13:23:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:12.668 13:23:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:12.668 13:23:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:12.668 ************************************
00:07:12.668 START TEST raid0_resize_superblock_test
00:07:12.668 ************************************
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60112
00:07:12.668 Process raid pid: 60112
13:23:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60112'
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60112
00:07:12.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60112 ']'
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:12.668 13:23:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.927 [2024-11-18 13:23:42.740179] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:07:12.927 [2024-11-18 13:23:42.740922] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:12.927 [2024-11-18 13:23:42.907593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.185 [2024-11-18 13:23:43.048261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.444 [2024-11-18 13:23:43.297270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:13.444 [2024-11-18 13:23:43.297440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:13.702 13:23:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:13.702 13:23:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:13.702 13:23:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:13.702 13:23:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:13.702 13:23:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.271 malloc0
00:07:14.271 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.271 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:14.271 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.271 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.271 [2024-11-18 13:23:44.234850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:14.271 [2024-11-18 13:23:44.235001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:14.271 [2024-11-18 13:23:44.235055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:14.271 [2024-11-18 13:23:44.235099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:14.271 [2024-11-18 13:23:44.237886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:14.271 [2024-11-18 13:23:44.237982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:14.271 pt0
00:07:14.271 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.271 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:14.271 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.271 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.530 d4d96168-604a-4916-9747-08e5377e24ca
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.530 afbe405d-a081-40f8-bede-4a544db015bf
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.530 c912782d-e00a-455f-8b29-7d752355e2db
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.530 [2024-11-18 13:23:44.450018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev afbe405d-a081-40f8-bede-4a544db015bf is claimed
00:07:14.530 [2024-11-18 13:23:44.450234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c912782d-e00a-455f-8b29-7d752355e2db is claimed
00:07:14.530 [2024-11-18 13:23:44.450412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:14.530 [2024-11-18 13:23:44.450431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:07:14.530 [2024-11-18 13:23:44.450800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:14.530 [2024-11-18 13:23:44.451056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:14.530 [2024-11-18 13:23:44.451067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:14.530 [2024-11-18 13:23:44.451304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.530 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.531 [2024-11-18 13:23:44.562038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 [2024-11-18 13:23:44.609963] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:14.791 [2024-11-18 13:23:44.610057] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'afbe405d-a081-40f8-bede-4a544db015bf' was resized: old size 131072, new size 204800
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 [2024-11-18 13:23:44.621809] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:14.791 [2024-11-18 13:23:44.621840] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c912782d-e00a-455f-8b29-7d752355e2db' was resized: old size 131072, new size 204800
00:07:14.791 [2024-11-18 13:23:44.621870] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 [2024-11-18 13:23:44.737698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 [2024-11-18 13:23:44.785409] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:07:14.791 [2024-11-18 13:23:44.785510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:07:14.791 [2024-11-18 13:23:44.785523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:14.791 [2024-11-18 13:23:44.785543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:07:14.791 [2024-11-18 13:23:44.785684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:14.791 [2024-11-18 13:23:44.785722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:14.791 [2024-11-18 13:23:44.785735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 [2024-11-18 13:23:44.797251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:14.791 [2024-11-18 13:23:44.797352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:14.791 [2024-11-18 13:23:44.797377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:14.791 [2024-11-18 13:23:44.797389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-11-18 13:23:44.800060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-18 13:23:44.800166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
[2024-11-18 13:23:44.802026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev afbe405d-a081-40f8-bede-4a544db015bf
[2024-11-18 13:23:44.802117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev afbe405d-a081-40f8-bede-4a544db015bf is claimed
[2024-11-18 13:23:44.802265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c912782d-e00a-455f-8b29-7d752355e2db
[2024-11-18 13:23:44.802288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c912782d-e00a-455f-8b29-7d752355e2db is claimed
[2024-11-18 13:23:44.802481] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c912782d-e00a-455f-8b29-7d752355e2db (2) smaller than existing raid bdev Raid (3)
[2024-11-18 13:23:44.802508] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev afbe405d-a081-40f8-bede-4a544db015bf: File exists
[2024-11-18 13:23:44.802544] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-11-18 13:23:44.802557] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-11-18 13:23:44.802822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
pt0
[2024-11-18 13:23:44.802979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-11-18 13:23:44.802988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-18 13:23:44.803153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:14.791 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:14.791 [2024-11-18 13:23:44.826373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60112
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60112 ']'
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60112
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60112
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:15.051 13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60112'
killing process with pid 60112
13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60112
00:07:15.051 [2024-11-18 13:23:44.913425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
13:23:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60112
00:07:15.052 [2024-11-18 13:23:44.913588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:15.052 [2024-11-18 13:23:44.913658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:15.052 [2024-11-18 13:23:44.913668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:16.960 [2024-11-18 13:23:46.535200] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:17.899 13:23:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:17.899
00:07:17.899 real 0m5.143s
00:07:17.899 user 0m5.232s
00:07:17.899 sys 0m0.749s
00:07:17.899 ************************************
00:07:17.899 END TEST raid0_resize_superblock_test
00:07:17.899 ************************************
00:07:17.899 13:23:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:17.899 13:23:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.899 13:23:47 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:07:17.899 13:23:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:17.899 13:23:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:17.899 13:23:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:17.899 ************************************
00:07:17.899 START TEST raid1_resize_superblock_test
00:07:17.899 ************************************
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:07:17.899 Process raid pid: 60222
13:23:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60222
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60222'
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60222
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60222 ']'
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:23:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:17.899 13:23:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.158 [2024-11-18 13:23:47.960458] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:07:18.158 [2024-11-18 13:23:47.960742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:18.158 [2024-11-18 13:23:48.143882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.417 [2024-11-18 13:23:48.286235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.676 [2024-11-18 13:23:48.532780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:18.676 [2024-11-18 13:23:48.532826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:18.936 13:23:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:18.936 13:23:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:18.936 13:23:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:18.936 13:23:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.936 13:23:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.504 malloc0
00:07:19.504 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.504 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:19.504 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.504 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.504 [2024-11-18 13:23:49.463919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:19.504 [2024-11-18 13:23:49.463995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:19.504 [2024-11-18 13:23:49.464022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:19.504 [2024-11-18 13:23:49.464034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:19.504 [2024-11-18 13:23:49.466558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:19.504 [2024-11-18 13:23:49.466602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.504 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:19.504 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.504 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.764 98ecdf4e-0560-4565-9906-dc9fbb527d62
00:07:19.764 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.764 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.765 76b19263-d56e-43b2-beb0-706cb49b2a58 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.765 832cdc74-68f8-44de-9bb7-e48e1a4e1871 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.765 [2024-11-18 13:23:49.675929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 76b19263-d56e-43b2-beb0-706cb49b2a58 is claimed 00:07:19.765 [2024-11-18 13:23:49.676105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 832cdc74-68f8-44de-9bb7-e48e1a4e1871 is claimed 00:07:19.765 [2024-11-18 13:23:49.676268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:19.765 [2024-11-18 13:23:49.676288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:19.765 [2024-11-18 13:23:49.676580] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.765 [2024-11-18 13:23:49.676789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:19.765 [2024-11-18 13:23:49.676799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:19.765 [2024-11-18 13:23:49.676970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:19.765 13:23:49 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.765 [2024-11-18 13:23:49.772079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.765 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.765 [2024-11-18 13:23:49.811996] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:19.765 [2024-11-18 13:23:49.812086] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '76b19263-d56e-43b2-beb0-706cb49b2a58' was resized: old size 131072, new size 204800 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.025 [2024-11-18 13:23:49.823900] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:20.025 [2024-11-18 13:23:49.823934] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '832cdc74-68f8-44de-9bb7-e48e1a4e1871' was resized: old size 131072, new size 204800 00:07:20.025 [2024-11-18 13:23:49.823973] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.025 13:23:49 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.025 [2024-11-18 13:23:49.931755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.025 [2024-11-18 13:23:49.975436] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:20.025 [2024-11-18 13:23:49.975562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:20.025 [2024-11-18 13:23:49.975608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:20.025 [2024-11-18 13:23:49.975815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.025 [2024-11-18 13:23:49.976085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.025 [2024-11-18 13:23:49.976210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.025 [2024-11-18 13:23:49.976266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.025 [2024-11-18 13:23:49.987335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:20.025 [2024-11-18 13:23:49.987453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.025 [2024-11-18 13:23:49.987503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:20.025 [2024-11-18 13:23:49.987547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.025 [2024-11-18 13:23:49.990544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.025 [2024-11-18 13:23:49.990629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:20.025 pt0 00:07:20.025 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.026 [2024-11-18 13:23:49.992696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 76b19263-d56e-43b2-beb0-706cb49b2a58 00:07:20.026 [2024-11-18 13:23:49.992772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 76b19263-d56e-43b2-beb0-706cb49b2a58 is claimed 00:07:20.026 [2024-11-18 13:23:49.992901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 832cdc74-68f8-44de-9bb7-e48e1a4e1871 00:07:20.026 [2024-11-18 13:23:49.992997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 832cdc74-68f8-44de-9bb7-e48e1a4e1871 is claimed 00:07:20.026 [2024-11-18 13:23:49.993203] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 832cdc74-68f8-44de-9bb7-e48e1a4e1871 (2) smaller than existing raid bdev Raid (3) 00:07:20.026 [2024-11-18 13:23:49.993279] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 76b19263-d56e-43b2-beb0-706cb49b2a58: File exists 00:07:20.026 13:23:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:20.026 [2024-11-18 13:23:49.993396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:20.026 [2024-11-18 13:23:49.993435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:20.026 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.026 [2024-11-18 13:23:49.993748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:20.026 [2024-11-18 13:23:49.993943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:20.026 [2024-11-18 13:23:49.993997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, 
raid_bdev 0x617000007b00 00:07:20.026 13:23:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.026 [2024-11-18 13:23:49.994235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.026 [2024-11-18 13:23:50.015587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60222 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60222 ']' 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60222 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.026 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60222 00:07:20.285 killing process with pid 60222 00:07:20.285 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.285 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.285 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60222' 00:07:20.285 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60222 00:07:20.285 [2024-11-18 13:23:50.101729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.285 [2024-11-18 13:23:50.101852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.285 13:23:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60222 00:07:20.285 [2024-11-18 13:23:50.101919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.285 [2024-11-18 13:23:50.101929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:22.241 [2024-11-18 13:23:51.775590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.180 13:23:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:23.180 00:07:23.180 real 0m5.191s 00:07:23.180 user 0m5.193s 00:07:23.180 sys 0m0.804s 00:07:23.180 13:23:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.180 ************************************ 00:07:23.180 END TEST raid1_resize_superblock_test 00:07:23.180 
************************************ 00:07:23.180 13:23:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.180 13:23:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:23.180 13:23:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:23.180 13:23:53 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:23.180 13:23:53 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:23.180 13:23:53 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:23.180 13:23:53 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:23.180 13:23:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.180 13:23:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.180 13:23:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.180 ************************************ 00:07:23.180 START TEST raid_function_test_raid0 00:07:23.180 ************************************ 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60330 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60330' 00:07:23.180 Process raid pid: 60330 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 
60330 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60330 ']' 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.180 13:23:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.440 [2024-11-18 13:23:53.238912] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:23.440 [2024-11-18 13:23:53.239197] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.440 [2024-11-18 13:23:53.411246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.699 [2024-11-18 13:23:53.558852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.959 [2024-11-18 13:23:53.813561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.959 [2024-11-18 13:23:53.813683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:24.223 Base_1 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:24.223 Base_2 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:24.223 [2024-11-18 13:23:54.199807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:24.223 [2024-11-18 13:23:54.202261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:24.223 [2024-11-18 13:23:54.202358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:24.223 [2024-11-18 13:23:54.202373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:24.223 [2024-11-18 13:23:54.202720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:24.223 [2024-11-18 13:23:54.202916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:07:24.223 [2024-11-18 13:23:54.202927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:24.223 [2024-11-18 13:23:54.203165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.223 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:24.224 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:24.483 [2024-11-18 13:23:54.471414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:24.483 /dev/nbd0 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.483 1+0 records in 00:07:24.483 1+0 records out 00:07:24.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612736 s, 6.7 MB/s 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.483 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.742 13:23:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:24.742 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.742 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:24.742 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:24.742 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.742 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:24.742 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.742 { 00:07:24.742 "nbd_device": "/dev/nbd0", 00:07:24.742 "bdev_name": "raid" 00:07:24.742 } 00:07:24.742 ]' 00:07:24.742 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.742 { 00:07:24.743 "nbd_device": "/dev/nbd0", 00:07:24.743 "bdev_name": "raid" 00:07:24.743 } 00:07:24.743 ]' 00:07:24.743 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 
00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:25.001 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 
00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:25.002 4096+0 records in 00:07:25.002 4096+0 records out 00:07:25.002 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341501 s, 61.4 MB/s 00:07:25.002 13:23:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:25.262 4096+0 records in 00:07:25.262 4096+0 records out 00:07:25.262 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.240614 s, 8.7 MB/s 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:25.262 128+0 records in 00:07:25.262 128+0 records out 00:07:25.262 65536 bytes (66 kB, 64 KiB) copied, 0.00132479 s, 49.5 MB/s 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:25.262 2035+0 records in 00:07:25.262 2035+0 records out 00:07:25.262 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0166212 s, 62.7 MB/s 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:25.262 456+0 records in 00:07:25.262 456+0 records out 00:07:25.262 233472 bytes (233 kB, 228 KiB) copied, 0.00424229 s, 55.0 MB/s 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.262 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.521 [2024-11-18 13:23:55.480441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.521 13:23:55 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:25.521 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60330 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 
60330 ']' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60330 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60330 00:07:25.781 killing process with pid 60330 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60330' 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60330 00:07:25.781 [2024-11-18 13:23:55.822563] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.781 [2024-11-18 13:23:55.822695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.781 13:23:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60330 00:07:25.781 [2024-11-18 13:23:55.822755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.781 [2024-11-18 13:23:55.822772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:26.040 [2024-11-18 13:23:56.062606] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.419 ************************************ 00:07:27.419 END TEST raid_function_test_raid0 00:07:27.419 ************************************ 00:07:27.419 13:23:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:27.419 00:07:27.419 real 0m4.175s 
00:07:27.419 user 0m4.753s 00:07:27.419 sys 0m1.103s 00:07:27.419 13:23:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.419 13:23:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:27.419 13:23:57 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:27.419 13:23:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.419 13:23:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.419 13:23:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.419 ************************************ 00:07:27.419 START TEST raid_function_test_concat 00:07:27.419 ************************************ 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:27.419 Process raid pid: 60459 00:07:27.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60459 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60459' 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60459 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60459 ']' 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.419 13:23:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:27.694 [2024-11-18 13:23:57.488731] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:27.694 [2024-11-18 13:23:57.489074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.694 [2024-11-18 13:23:57.662904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.972 [2024-11-18 13:23:57.825550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.232 [2024-11-18 13:23:58.116958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.232 [2024-11-18 13:23:58.117089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 Base_1 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 Base_2 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 [2024-11-18 13:23:58.427360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:28.491 [2024-11-18 13:23:58.429922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:28.491 [2024-11-18 13:23:58.430078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:28.491 [2024-11-18 13:23:58.430150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:28.491 [2024-11-18 13:23:58.430525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:28.491 [2024-11-18 13:23:58.430818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:28.491 [2024-11-18 13:23:58.430873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:28.491 [2024-11-18 13:23:58.431147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.491 13:23:58 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:28.491 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:28.751 [2024-11-18 13:23:58.671359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:28.751 /dev/nbd0 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.751 1+0 records in 00:07:28.751 1+0 records out 00:07:28.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277635 s, 14.8 MB/s 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:07:28.751 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:29.010 { 00:07:29.010 "nbd_device": "/dev/nbd0", 00:07:29.010 "bdev_name": "raid" 00:07:29.010 } 00:07:29.010 ]' 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:29.010 { 00:07:29.010 "nbd_device": "/dev/nbd0", 00:07:29.010 "bdev_name": "raid" 00:07:29.010 } 00:07:29.010 ]' 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:07:29.010 13:23:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:29.010 4096+0 records in 00:07:29.010 4096+0 records out 00:07:29.010 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0352584 s, 59.5 MB/s 00:07:29.010 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:29.270 4096+0 records in 00:07:29.270 4096+0 records out 00:07:29.270 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.253604 s, 8.3 MB/s 00:07:29.270 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:29.270 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:29.530 128+0 records in 00:07:29.530 128+0 records out 00:07:29.530 65536 bytes (66 kB, 64 KiB) copied, 0.0011425 s, 57.4 MB/s 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:29.530 2035+0 records in 00:07:29.530 2035+0 records out 00:07:29.530 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0142866 s, 72.9 MB/s 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:29.530 13:23:59 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:29.530 456+0 records in 00:07:29.530 456+0 records out 00:07:29.530 233472 bytes (233 kB, 228 KiB) copied, 0.00374234 s, 62.4 MB/s 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:29.530 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:29.530 
13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:29.531 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:29.531 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.531 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:29.790 [2024-11-18 13:23:59.695451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:29.790 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:30.049 13:23:59 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:30.049 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:30.050 13:23:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60459 00:07:30.050 13:23:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60459 ']' 00:07:30.050 13:23:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60459 00:07:30.050 13:23:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:30.050 13:23:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.050 13:23:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60459 00:07:30.050 killing process with pid 60459 00:07:30.050 13:24:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.050 13:24:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.050 13:24:00 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60459' 00:07:30.050 13:24:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60459 00:07:30.050 [2024-11-18 13:24:00.016357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.050 13:24:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60459 00:07:30.050 [2024-11-18 13:24:00.016503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.050 [2024-11-18 13:24:00.016573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.050 [2024-11-18 13:24:00.016589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:30.308 [2024-11-18 13:24:00.288821] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.686 13:24:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:31.686 00:07:31.686 real 0m4.333s 00:07:31.686 user 0m4.886s 00:07:31.686 sys 0m1.069s 00:07:31.686 13:24:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.686 ************************************ 00:07:31.686 END TEST raid_function_test_concat 00:07:31.687 ************************************ 00:07:31.687 13:24:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:31.946 13:24:01 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:31.946 13:24:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.946 13:24:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.946 13:24:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.946 ************************************ 00:07:31.946 START TEST raid0_resize_test 00:07:31.947 ************************************ 00:07:31.947 13:24:01 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60588 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60588' 00:07:31.947 Process raid pid: 60588 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60588 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60588 ']' 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:31.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.947 13:24:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.947 [2024-11-18 13:24:01.863603] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:31.947 [2024-11-18 13:24:01.863805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.206 [2024-11-18 13:24:02.038011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.206 [2024-11-18 13:24:02.195098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.466 [2024-11-18 13:24:02.473892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.466 [2024-11-18 13:24:02.473953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.725 Base_1 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.725 Base_2 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.725 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.985 [2024-11-18 13:24:02.778506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:32.985 [2024-11-18 13:24:02.780913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:32.985 [2024-11-18 13:24:02.780982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:32.985 [2024-11-18 13:24:02.780996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:32.985 [2024-11-18 13:24:02.781343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:32.985 [2024-11-18 13:24:02.781514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:32.985 [2024-11-18 13:24:02.781527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:32.985 [2024-11-18 13:24:02.781739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.985 [2024-11-18 13:24:02.790454] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:32.985 [2024-11-18 13:24:02.790551] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:32.985 true 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.985 [2024-11-18 13:24:02.806633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.985 [2024-11-18 13:24:02.850493] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:32.985 [2024-11-18 13:24:02.850675] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:32.985 [2024-11-18 13:24:02.850787] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:32.985 true 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.985 [2024-11-18 13:24:02.866685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60588 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60588 ']' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60588 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@959 -- # uname 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60588 00:07:32.985 killing process with pid 60588 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60588' 00:07:32.985 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60588 00:07:32.985 [2024-11-18 13:24:02.939075] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.985 [2024-11-18 13:24:02.939227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.986 13:24:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60588 00:07:32.986 [2024-11-18 13:24:02.939297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.986 [2024-11-18 13:24:02.939311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:32.986 [2024-11-18 13:24:02.962056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.366 13:24:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:34.366 00:07:34.366 real 0m2.612s 00:07:34.366 user 0m2.685s 00:07:34.366 sys 0m0.444s 00:07:34.366 13:24:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.366 13:24:04 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.366 ************************************ 00:07:34.366 END TEST raid0_resize_test 00:07:34.366 
************************************ 00:07:34.625 13:24:04 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:34.625 13:24:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:34.625 13:24:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.625 13:24:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.625 ************************************ 00:07:34.626 START TEST raid1_resize_test 00:07:34.626 ************************************ 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60650 00:07:34.626 Process raid pid: 60650 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60650' 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60650 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@835 -- # '[' -z 60650 ']' 00:07:34.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.626 13:24:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.626 [2024-11-18 13:24:04.537214] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:34.626 [2024-11-18 13:24:04.537324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.886 [2024-11-18 13:24:04.715663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.886 [2024-11-18 13:24:04.882033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.145 [2024-11-18 13:24:05.171054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.145 [2024-11-18 13:24:05.171115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.404 Base_1 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.404 Base_2 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:35.404 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.405 [2024-11-18 13:24:05.401340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:35.405 [2024-11-18 13:24:05.404266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:35.405 [2024-11-18 13:24:05.404375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:35.405 [2024-11-18 13:24:05.404398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:35.405 [2024-11-18 13:24:05.404802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:35.405 [2024-11-18 13:24:05.405026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:35.405 [2024-11-18 13:24:05.405048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Raid, raid_bdev 0x617000007780 00:07:35.405 [2024-11-18 13:24:05.405369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.405 [2024-11-18 13:24:05.413404] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:35.405 [2024-11-18 13:24:05.413451] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:35.405 true 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.405 [2024-11-18 13:24:05.429652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.405 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 
00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.664 [2024-11-18 13:24:05.473320] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:35.664 [2024-11-18 13:24:05.473366] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:35.664 [2024-11-18 13:24:05.473409] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:35.664 true 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.664 [2024-11-18 13:24:05.489553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:35.664 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:35.665 
13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60650 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60650 ']' 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60650 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60650 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60650' 00:07:35.665 killing process with pid 60650 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60650 00:07:35.665 [2024-11-18 13:24:05.569421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.665 13:24:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60650 00:07:35.665 [2024-11-18 13:24:05.569697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.665 [2024-11-18 13:24:05.570592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.665 [2024-11-18 13:24:05.570720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:35.665 [2024-11-18 13:24:05.591171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.055 13:24:07 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:07:37.055 00:07:37.055 real 0m2.569s 00:07:37.055 user 0m2.603s 00:07:37.055 sys 0m0.440s 00:07:37.055 13:24:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.055 13:24:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.055 ************************************ 00:07:37.055 END TEST raid1_resize_test 00:07:37.055 ************************************ 00:07:37.055 13:24:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:37.055 13:24:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:37.055 13:24:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:37.055 13:24:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:37.055 13:24:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.055 13:24:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.055 ************************************ 00:07:37.055 START TEST raid_state_function_test 00:07:37.055 ************************************ 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.055 13:24:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:37.055 Process raid pid: 60712 00:07:37.055 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60712 00:07:37.056 13:24:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60712' 00:07:37.056 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:37.056 13:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60712 00:07:37.056 13:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60712 ']' 00:07:37.056 13:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.056 13:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.056 13:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.056 13:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.056 13:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.315 [2024-11-18 13:24:07.182447] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:37.315 [2024-11-18 13:24:07.182649] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.315 [2024-11-18 13:24:07.344504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.574 [2024-11-18 13:24:07.505085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.833 [2024-11-18 13:24:07.760701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.833 [2024-11-18 13:24:07.760863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.092 [2024-11-18 13:24:08.123558] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.092 [2024-11-18 13:24:08.123753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.092 [2024-11-18 13:24:08.123770] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.092 [2024-11-18 13:24:08.123782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.092 13:24:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.092 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.093 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.352 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.352 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.352 "name": "Existed_Raid", 00:07:38.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.352 "strip_size_kb": 64, 00:07:38.352 "state": "configuring", 00:07:38.352 
"raid_level": "raid0", 00:07:38.352 "superblock": false, 00:07:38.352 "num_base_bdevs": 2, 00:07:38.352 "num_base_bdevs_discovered": 0, 00:07:38.352 "num_base_bdevs_operational": 2, 00:07:38.352 "base_bdevs_list": [ 00:07:38.352 { 00:07:38.352 "name": "BaseBdev1", 00:07:38.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.352 "is_configured": false, 00:07:38.352 "data_offset": 0, 00:07:38.352 "data_size": 0 00:07:38.352 }, 00:07:38.352 { 00:07:38.352 "name": "BaseBdev2", 00:07:38.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.352 "is_configured": false, 00:07:38.352 "data_offset": 0, 00:07:38.352 "data_size": 0 00:07:38.352 } 00:07:38.352 ] 00:07:38.352 }' 00:07:38.352 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.352 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.612 [2024-11-18 13:24:08.586675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.612 [2024-11-18 13:24:08.586812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:38.612 [2024-11-18 13:24:08.598616] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:38.612 [2024-11-18 13:24:08.598709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:38.612 [2024-11-18 13:24:08.598743] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.612 [2024-11-18 13:24:08.598772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.612 [2024-11-18 13:24:08.653956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.612 BaseBdev1 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.612 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.873 [ 00:07:38.873 { 00:07:38.873 "name": "BaseBdev1", 00:07:38.873 "aliases": [ 00:07:38.873 "ac80a563-a288-4d62-aa77-f85639299568" 00:07:38.873 ], 00:07:38.873 "product_name": "Malloc disk", 00:07:38.873 "block_size": 512, 00:07:38.873 "num_blocks": 65536, 00:07:38.873 "uuid": "ac80a563-a288-4d62-aa77-f85639299568", 00:07:38.873 "assigned_rate_limits": { 00:07:38.873 "rw_ios_per_sec": 0, 00:07:38.873 "rw_mbytes_per_sec": 0, 00:07:38.873 "r_mbytes_per_sec": 0, 00:07:38.873 "w_mbytes_per_sec": 0 00:07:38.873 }, 00:07:38.873 "claimed": true, 00:07:38.873 "claim_type": "exclusive_write", 00:07:38.873 "zoned": false, 00:07:38.873 "supported_io_types": { 00:07:38.873 "read": true, 00:07:38.873 "write": true, 00:07:38.873 "unmap": true, 00:07:38.873 "flush": true, 00:07:38.873 "reset": true, 00:07:38.873 "nvme_admin": false, 00:07:38.873 "nvme_io": false, 00:07:38.873 "nvme_io_md": false, 00:07:38.873 "write_zeroes": true, 00:07:38.873 "zcopy": true, 00:07:38.873 "get_zone_info": false, 00:07:38.873 "zone_management": false, 00:07:38.873 "zone_append": false, 00:07:38.873 "compare": false, 00:07:38.873 "compare_and_write": false, 00:07:38.873 "abort": true, 00:07:38.873 "seek_hole": false, 00:07:38.873 "seek_data": false, 00:07:38.873 "copy": true, 00:07:38.873 "nvme_iov_md": 
false 00:07:38.873 }, 00:07:38.873 "memory_domains": [ 00:07:38.873 { 00:07:38.873 "dma_device_id": "system", 00:07:38.873 "dma_device_type": 1 00:07:38.873 }, 00:07:38.873 { 00:07:38.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.873 "dma_device_type": 2 00:07:38.873 } 00:07:38.873 ], 00:07:38.873 "driver_specific": {} 00:07:38.873 } 00:07:38.873 ] 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.873 13:24:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.873 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.874 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.874 "name": "Existed_Raid", 00:07:38.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.874 "strip_size_kb": 64, 00:07:38.874 "state": "configuring", 00:07:38.874 "raid_level": "raid0", 00:07:38.874 "superblock": false, 00:07:38.874 "num_base_bdevs": 2, 00:07:38.874 "num_base_bdevs_discovered": 1, 00:07:38.874 "num_base_bdevs_operational": 2, 00:07:38.874 "base_bdevs_list": [ 00:07:38.874 { 00:07:38.874 "name": "BaseBdev1", 00:07:38.874 "uuid": "ac80a563-a288-4d62-aa77-f85639299568", 00:07:38.874 "is_configured": true, 00:07:38.874 "data_offset": 0, 00:07:38.874 "data_size": 65536 00:07:38.874 }, 00:07:38.874 { 00:07:38.874 "name": "BaseBdev2", 00:07:38.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.874 "is_configured": false, 00:07:38.874 "data_offset": 0, 00:07:38.874 "data_size": 0 00:07:38.874 } 00:07:38.874 ] 00:07:38.874 }' 00:07:38.874 13:24:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.874 13:24:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.134 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:39.134 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.134 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.134 [2024-11-18 13:24:09.169217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:39.134 [2024-11-18 13:24:09.169303] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:39.134 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.134 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:39.134 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.134 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.135 [2024-11-18 13:24:09.181295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.135 [2024-11-18 13:24:09.183495] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:39.135 [2024-11-18 13:24:09.183642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.393 "name": "Existed_Raid", 00:07:39.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.393 "strip_size_kb": 64, 00:07:39.393 "state": "configuring", 00:07:39.393 "raid_level": "raid0", 00:07:39.393 "superblock": false, 00:07:39.393 "num_base_bdevs": 2, 00:07:39.393 "num_base_bdevs_discovered": 1, 00:07:39.393 "num_base_bdevs_operational": 2, 00:07:39.393 "base_bdevs_list": [ 00:07:39.393 { 00:07:39.393 "name": "BaseBdev1", 00:07:39.393 "uuid": "ac80a563-a288-4d62-aa77-f85639299568", 00:07:39.393 "is_configured": true, 00:07:39.393 "data_offset": 0, 00:07:39.393 "data_size": 65536 00:07:39.393 }, 00:07:39.393 { 00:07:39.393 "name": "BaseBdev2", 00:07:39.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.393 "is_configured": false, 00:07:39.393 "data_offset": 0, 00:07:39.393 "data_size": 0 
00:07:39.393 } 00:07:39.393 ] 00:07:39.393 }' 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.393 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.653 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.653 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.653 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.653 [2024-11-18 13:24:09.698727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.653 [2024-11-18 13:24:09.698907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.653 [2024-11-18 13:24:09.698940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:39.653 [2024-11-18 13:24:09.699339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:39.653 [2024-11-18 13:24:09.699601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.653 [2024-11-18 13:24:09.699661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:39.653 [2024-11-18 13:24:09.700065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.653 BaseBdev2 00:07:39.653 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.653 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:39.653 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:39.653 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.653 13:24:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:39.653 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.913 [ 00:07:39.913 { 00:07:39.913 "name": "BaseBdev2", 00:07:39.913 "aliases": [ 00:07:39.913 "b8485b82-60d8-4d10-8574-c97545518adb" 00:07:39.913 ], 00:07:39.913 "product_name": "Malloc disk", 00:07:39.913 "block_size": 512, 00:07:39.913 "num_blocks": 65536, 00:07:39.913 "uuid": "b8485b82-60d8-4d10-8574-c97545518adb", 00:07:39.913 "assigned_rate_limits": { 00:07:39.913 "rw_ios_per_sec": 0, 00:07:39.913 "rw_mbytes_per_sec": 0, 00:07:39.913 "r_mbytes_per_sec": 0, 00:07:39.913 "w_mbytes_per_sec": 0 00:07:39.913 }, 00:07:39.913 "claimed": true, 00:07:39.913 "claim_type": "exclusive_write", 00:07:39.913 "zoned": false, 00:07:39.913 "supported_io_types": { 00:07:39.913 "read": true, 00:07:39.913 "write": true, 00:07:39.913 "unmap": true, 00:07:39.913 "flush": true, 00:07:39.913 "reset": true, 00:07:39.913 "nvme_admin": false, 00:07:39.913 "nvme_io": false, 00:07:39.913 "nvme_io_md": 
false, 00:07:39.913 "write_zeroes": true, 00:07:39.913 "zcopy": true, 00:07:39.913 "get_zone_info": false, 00:07:39.913 "zone_management": false, 00:07:39.913 "zone_append": false, 00:07:39.913 "compare": false, 00:07:39.913 "compare_and_write": false, 00:07:39.913 "abort": true, 00:07:39.913 "seek_hole": false, 00:07:39.913 "seek_data": false, 00:07:39.913 "copy": true, 00:07:39.913 "nvme_iov_md": false 00:07:39.913 }, 00:07:39.913 "memory_domains": [ 00:07:39.913 { 00:07:39.913 "dma_device_id": "system", 00:07:39.913 "dma_device_type": 1 00:07:39.913 }, 00:07:39.913 { 00:07:39.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.913 "dma_device_type": 2 00:07:39.913 } 00:07:39.913 ], 00:07:39.913 "driver_specific": {} 00:07:39.913 } 00:07:39.913 ] 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.913 "name": "Existed_Raid", 00:07:39.913 "uuid": "42516e46-3c90-4e01-bd53-b5d2b60b8ee2", 00:07:39.913 "strip_size_kb": 64, 00:07:39.913 "state": "online", 00:07:39.913 "raid_level": "raid0", 00:07:39.913 "superblock": false, 00:07:39.913 "num_base_bdevs": 2, 00:07:39.913 "num_base_bdevs_discovered": 2, 00:07:39.913 "num_base_bdevs_operational": 2, 00:07:39.913 "base_bdevs_list": [ 00:07:39.913 { 00:07:39.913 "name": "BaseBdev1", 00:07:39.913 "uuid": "ac80a563-a288-4d62-aa77-f85639299568", 00:07:39.913 "is_configured": true, 00:07:39.913 "data_offset": 0, 00:07:39.913 "data_size": 65536 00:07:39.913 }, 00:07:39.913 { 00:07:39.913 "name": "BaseBdev2", 00:07:39.913 "uuid": "b8485b82-60d8-4d10-8574-c97545518adb", 00:07:39.913 "is_configured": true, 00:07:39.913 "data_offset": 0, 00:07:39.913 "data_size": 65536 00:07:39.913 } 00:07:39.913 ] 00:07:39.913 }' 00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:39.913 13:24:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.174 [2024-11-18 13:24:10.178503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.174 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.174 "name": "Existed_Raid", 00:07:40.174 "aliases": [ 00:07:40.174 "42516e46-3c90-4e01-bd53-b5d2b60b8ee2" 00:07:40.174 ], 00:07:40.174 "product_name": "Raid Volume", 00:07:40.174 "block_size": 512, 00:07:40.174 "num_blocks": 131072, 00:07:40.174 "uuid": "42516e46-3c90-4e01-bd53-b5d2b60b8ee2", 00:07:40.174 "assigned_rate_limits": { 00:07:40.174 "rw_ios_per_sec": 0, 00:07:40.174 "rw_mbytes_per_sec": 0, 00:07:40.174 "r_mbytes_per_sec": 
0, 00:07:40.174 "w_mbytes_per_sec": 0 00:07:40.174 }, 00:07:40.174 "claimed": false, 00:07:40.174 "zoned": false, 00:07:40.174 "supported_io_types": { 00:07:40.174 "read": true, 00:07:40.174 "write": true, 00:07:40.174 "unmap": true, 00:07:40.174 "flush": true, 00:07:40.174 "reset": true, 00:07:40.174 "nvme_admin": false, 00:07:40.174 "nvme_io": false, 00:07:40.174 "nvme_io_md": false, 00:07:40.174 "write_zeroes": true, 00:07:40.174 "zcopy": false, 00:07:40.174 "get_zone_info": false, 00:07:40.174 "zone_management": false, 00:07:40.174 "zone_append": false, 00:07:40.174 "compare": false, 00:07:40.174 "compare_and_write": false, 00:07:40.174 "abort": false, 00:07:40.174 "seek_hole": false, 00:07:40.174 "seek_data": false, 00:07:40.174 "copy": false, 00:07:40.174 "nvme_iov_md": false 00:07:40.174 }, 00:07:40.174 "memory_domains": [ 00:07:40.174 { 00:07:40.174 "dma_device_id": "system", 00:07:40.174 "dma_device_type": 1 00:07:40.174 }, 00:07:40.174 { 00:07:40.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.175 "dma_device_type": 2 00:07:40.175 }, 00:07:40.175 { 00:07:40.175 "dma_device_id": "system", 00:07:40.175 "dma_device_type": 1 00:07:40.175 }, 00:07:40.175 { 00:07:40.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.175 "dma_device_type": 2 00:07:40.175 } 00:07:40.175 ], 00:07:40.175 "driver_specific": { 00:07:40.175 "raid": { 00:07:40.175 "uuid": "42516e46-3c90-4e01-bd53-b5d2b60b8ee2", 00:07:40.175 "strip_size_kb": 64, 00:07:40.175 "state": "online", 00:07:40.175 "raid_level": "raid0", 00:07:40.175 "superblock": false, 00:07:40.175 "num_base_bdevs": 2, 00:07:40.175 "num_base_bdevs_discovered": 2, 00:07:40.175 "num_base_bdevs_operational": 2, 00:07:40.175 "base_bdevs_list": [ 00:07:40.175 { 00:07:40.175 "name": "BaseBdev1", 00:07:40.175 "uuid": "ac80a563-a288-4d62-aa77-f85639299568", 00:07:40.175 "is_configured": true, 00:07:40.175 "data_offset": 0, 00:07:40.175 "data_size": 65536 00:07:40.175 }, 00:07:40.175 { 00:07:40.175 "name": "BaseBdev2", 
00:07:40.175 "uuid": "b8485b82-60d8-4d10-8574-c97545518adb", 00:07:40.175 "is_configured": true, 00:07:40.175 "data_offset": 0, 00:07:40.175 "data_size": 65536 00:07:40.175 } 00:07:40.175 ] 00:07:40.175 } 00:07:40.175 } 00:07:40.175 }' 00:07:40.175 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:40.503 BaseBdev2' 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.503 [2024-11-18 13:24:10.401716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:40.503 [2024-11-18 13:24:10.401771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.503 [2024-11-18 13:24:10.401834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:40.503 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.504 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.764 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.764 "name": "Existed_Raid", 00:07:40.764 "uuid": "42516e46-3c90-4e01-bd53-b5d2b60b8ee2", 00:07:40.764 "strip_size_kb": 64, 00:07:40.764 
"state": "offline", 00:07:40.764 "raid_level": "raid0", 00:07:40.764 "superblock": false, 00:07:40.764 "num_base_bdevs": 2, 00:07:40.764 "num_base_bdevs_discovered": 1, 00:07:40.764 "num_base_bdevs_operational": 1, 00:07:40.764 "base_bdevs_list": [ 00:07:40.764 { 00:07:40.764 "name": null, 00:07:40.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.764 "is_configured": false, 00:07:40.764 "data_offset": 0, 00:07:40.764 "data_size": 65536 00:07:40.764 }, 00:07:40.764 { 00:07:40.764 "name": "BaseBdev2", 00:07:40.764 "uuid": "b8485b82-60d8-4d10-8574-c97545518adb", 00:07:40.764 "is_configured": true, 00:07:40.764 "data_offset": 0, 00:07:40.764 "data_size": 65536 00:07:40.764 } 00:07:40.764 ] 00:07:40.764 }' 00:07:40.764 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.764 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.024 13:24:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.024 [2024-11-18 13:24:10.998420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:41.024 [2024-11-18 13:24:10.998530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60712 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60712 ']' 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60712 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60712 00:07:41.283 killing process with pid 60712 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.283 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60712' 00:07:41.284 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60712 00:07:41.284 [2024-11-18 13:24:11.201517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.284 13:24:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60712 00:07:41.284 [2024-11-18 13:24:11.221768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:42.665 00:07:42.665 real 0m5.504s 00:07:42.665 user 0m7.724s 00:07:42.665 sys 0m0.936s 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.665 ************************************ 00:07:42.665 END TEST raid_state_function_test 00:07:42.665 ************************************ 00:07:42.665 13:24:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:42.665 13:24:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:42.665 13:24:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.665 13:24:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.665 ************************************ 00:07:42.665 START TEST raid_state_function_test_sb 00:07:42.665 ************************************ 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60971 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60971' 00:07:42.665 Process raid pid: 60971 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60971 00:07:42.665 13:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60971 ']' 00:07:42.666 13:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.666 13:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.666 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:42.666 13:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.666 13:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.666 13:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.925 [2024-11-18 13:24:12.760793] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:42.925 [2024-11-18 13:24:12.760934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.925 [2024-11-18 13:24:12.941922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.184 [2024-11-18 13:24:13.083860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.444 [2024-11-18 13:24:13.351755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.444 [2024-11-18 13:24:13.351816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.704 [2024-11-18 13:24:13.627469] bdev.c:8259:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:43.704 [2024-11-18 13:24:13.627544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.704 [2024-11-18 13:24:13.627557] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.704 [2024-11-18 13:24:13.627568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.704 "name": "Existed_Raid", 00:07:43.704 "uuid": "61a60fa1-cf79-44b8-a0e0-8f9b415b0545", 00:07:43.704 "strip_size_kb": 64, 00:07:43.704 "state": "configuring", 00:07:43.704 "raid_level": "raid0", 00:07:43.704 "superblock": true, 00:07:43.704 "num_base_bdevs": 2, 00:07:43.704 "num_base_bdevs_discovered": 0, 00:07:43.704 "num_base_bdevs_operational": 2, 00:07:43.704 "base_bdevs_list": [ 00:07:43.704 { 00:07:43.704 "name": "BaseBdev1", 00:07:43.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.704 "is_configured": false, 00:07:43.704 "data_offset": 0, 00:07:43.704 "data_size": 0 00:07:43.704 }, 00:07:43.704 { 00:07:43.704 "name": "BaseBdev2", 00:07:43.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.704 "is_configured": false, 00:07:43.704 "data_offset": 0, 00:07:43.704 "data_size": 0 00:07:43.704 } 00:07:43.704 ] 00:07:43.704 }' 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.704 13:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.278 [2024-11-18 13:24:14.034767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:44.278 [2024-11-18 13:24:14.034827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.278 [2024-11-18 13:24:14.046737] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.278 [2024-11-18 13:24:14.046797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.278 [2024-11-18 13:24:14.046807] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.278 [2024-11-18 13:24:14.046821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.278 [2024-11-18 13:24:14.104055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.278 BaseBdev1 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.278 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.278 [ 00:07:44.278 { 00:07:44.278 "name": "BaseBdev1", 00:07:44.278 "aliases": [ 00:07:44.278 "2bb7d3a2-ce5f-4d6b-bd86-d35b6b648471" 00:07:44.278 ], 00:07:44.279 "product_name": "Malloc disk", 00:07:44.279 "block_size": 512, 00:07:44.279 "num_blocks": 65536, 00:07:44.279 "uuid": "2bb7d3a2-ce5f-4d6b-bd86-d35b6b648471", 00:07:44.279 "assigned_rate_limits": { 00:07:44.279 "rw_ios_per_sec": 0, 00:07:44.279 "rw_mbytes_per_sec": 0, 00:07:44.279 "r_mbytes_per_sec": 0, 00:07:44.279 "w_mbytes_per_sec": 0 00:07:44.279 }, 00:07:44.279 "claimed": true, 
00:07:44.279 "claim_type": "exclusive_write", 00:07:44.279 "zoned": false, 00:07:44.279 "supported_io_types": { 00:07:44.279 "read": true, 00:07:44.279 "write": true, 00:07:44.279 "unmap": true, 00:07:44.279 "flush": true, 00:07:44.279 "reset": true, 00:07:44.279 "nvme_admin": false, 00:07:44.279 "nvme_io": false, 00:07:44.279 "nvme_io_md": false, 00:07:44.279 "write_zeroes": true, 00:07:44.279 "zcopy": true, 00:07:44.279 "get_zone_info": false, 00:07:44.279 "zone_management": false, 00:07:44.279 "zone_append": false, 00:07:44.279 "compare": false, 00:07:44.279 "compare_and_write": false, 00:07:44.279 "abort": true, 00:07:44.279 "seek_hole": false, 00:07:44.279 "seek_data": false, 00:07:44.279 "copy": true, 00:07:44.279 "nvme_iov_md": false 00:07:44.279 }, 00:07:44.279 "memory_domains": [ 00:07:44.279 { 00:07:44.279 "dma_device_id": "system", 00:07:44.279 "dma_device_type": 1 00:07:44.279 }, 00:07:44.279 { 00:07:44.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.279 "dma_device_type": 2 00:07:44.279 } 00:07:44.279 ], 00:07:44.279 "driver_specific": {} 00:07:44.279 } 00:07:44.279 ] 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.279 13:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.279 "name": "Existed_Raid", 00:07:44.279 "uuid": "863b2e82-77ad-451d-b30d-77800d6ad367", 00:07:44.279 "strip_size_kb": 64, 00:07:44.279 "state": "configuring", 00:07:44.279 "raid_level": "raid0", 00:07:44.279 "superblock": true, 00:07:44.279 "num_base_bdevs": 2, 00:07:44.279 "num_base_bdevs_discovered": 1, 00:07:44.279 "num_base_bdevs_operational": 2, 00:07:44.279 "base_bdevs_list": [ 00:07:44.279 { 00:07:44.279 "name": "BaseBdev1", 00:07:44.279 "uuid": "2bb7d3a2-ce5f-4d6b-bd86-d35b6b648471", 00:07:44.279 "is_configured": true, 00:07:44.279 "data_offset": 2048, 00:07:44.279 "data_size": 63488 00:07:44.279 }, 00:07:44.279 { 00:07:44.279 "name": "BaseBdev2", 00:07:44.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.279 
"is_configured": false, 00:07:44.279 "data_offset": 0, 00:07:44.279 "data_size": 0 00:07:44.279 } 00:07:44.279 ] 00:07:44.279 }' 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.279 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.547 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.547 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.547 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.547 [2024-11-18 13:24:14.571347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.547 [2024-11-18 13:24:14.571431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:44.547 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.547 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.548 [2024-11-18 13:24:14.583399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.548 [2024-11-18 13:24:14.585965] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.548 [2024-11-18 13:24:14.586016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.548 13:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.548 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.807 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.807 13:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.807 "name": "Existed_Raid", 00:07:44.807 "uuid": "86a3c860-32cc-4219-a46a-ec05f0799cc4", 00:07:44.807 "strip_size_kb": 64, 00:07:44.807 "state": "configuring", 00:07:44.807 "raid_level": "raid0", 00:07:44.807 "superblock": true, 00:07:44.807 "num_base_bdevs": 2, 00:07:44.807 "num_base_bdevs_discovered": 1, 00:07:44.807 "num_base_bdevs_operational": 2, 00:07:44.807 "base_bdevs_list": [ 00:07:44.807 { 00:07:44.807 "name": "BaseBdev1", 00:07:44.807 "uuid": "2bb7d3a2-ce5f-4d6b-bd86-d35b6b648471", 00:07:44.807 "is_configured": true, 00:07:44.807 "data_offset": 2048, 00:07:44.807 "data_size": 63488 00:07:44.807 }, 00:07:44.807 { 00:07:44.807 "name": "BaseBdev2", 00:07:44.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.807 "is_configured": false, 00:07:44.807 "data_offset": 0, 00:07:44.807 "data_size": 0 00:07:44.807 } 00:07:44.807 ] 00:07:44.807 }' 00:07:44.807 13:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.807 13:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.066 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:45.066 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.066 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.326 [2024-11-18 13:24:15.120106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.326 [2024-11-18 13:24:15.120471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:45.326 [2024-11-18 13:24:15.120495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:45.326 [2024-11-18 13:24:15.120829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:45.326 [2024-11-18 13:24:15.121018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:45.326 [2024-11-18 13:24:15.121041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:45.326 BaseBdev2 00:07:45.326 [2024-11-18 13:24:15.121234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.326 13:24:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.326 [ 00:07:45.326 { 00:07:45.326 "name": "BaseBdev2", 00:07:45.326 "aliases": [ 00:07:45.326 "28b8455e-403f-4968-9af1-02e005a5ef6f" 00:07:45.326 ], 00:07:45.326 "product_name": "Malloc disk", 00:07:45.326 "block_size": 512, 00:07:45.326 "num_blocks": 65536, 00:07:45.326 "uuid": "28b8455e-403f-4968-9af1-02e005a5ef6f", 00:07:45.326 "assigned_rate_limits": { 00:07:45.326 "rw_ios_per_sec": 0, 00:07:45.326 "rw_mbytes_per_sec": 0, 00:07:45.326 "r_mbytes_per_sec": 0, 00:07:45.326 "w_mbytes_per_sec": 0 00:07:45.326 }, 00:07:45.326 "claimed": true, 00:07:45.326 "claim_type": "exclusive_write", 00:07:45.326 "zoned": false, 00:07:45.326 "supported_io_types": { 00:07:45.326 "read": true, 00:07:45.326 "write": true, 00:07:45.326 "unmap": true, 00:07:45.326 "flush": true, 00:07:45.326 "reset": true, 00:07:45.326 "nvme_admin": false, 00:07:45.326 "nvme_io": false, 00:07:45.326 "nvme_io_md": false, 00:07:45.326 "write_zeroes": true, 00:07:45.326 "zcopy": true, 00:07:45.326 "get_zone_info": false, 00:07:45.326 "zone_management": false, 00:07:45.326 "zone_append": false, 00:07:45.326 "compare": false, 00:07:45.326 "compare_and_write": false, 00:07:45.326 "abort": true, 00:07:45.326 "seek_hole": false, 00:07:45.326 "seek_data": false, 00:07:45.326 "copy": true, 00:07:45.326 "nvme_iov_md": false 00:07:45.326 }, 00:07:45.326 "memory_domains": [ 00:07:45.326 { 00:07:45.326 "dma_device_id": "system", 00:07:45.326 "dma_device_type": 1 00:07:45.326 }, 00:07:45.326 { 00:07:45.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.326 "dma_device_type": 2 00:07:45.326 } 00:07:45.326 ], 00:07:45.326 "driver_specific": {} 00:07:45.326 } 00:07:45.326 ] 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:45.326 13:24:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.326 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.327 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.327 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.327 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.327 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.327 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.327 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.327 13:24:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.327 "name": "Existed_Raid", 00:07:45.327 "uuid": "86a3c860-32cc-4219-a46a-ec05f0799cc4", 00:07:45.327 "strip_size_kb": 64, 00:07:45.327 "state": "online", 00:07:45.327 "raid_level": "raid0", 00:07:45.327 "superblock": true, 00:07:45.327 "num_base_bdevs": 2, 00:07:45.327 "num_base_bdevs_discovered": 2, 00:07:45.327 "num_base_bdevs_operational": 2, 00:07:45.327 "base_bdevs_list": [ 00:07:45.327 { 00:07:45.327 "name": "BaseBdev1", 00:07:45.327 "uuid": "2bb7d3a2-ce5f-4d6b-bd86-d35b6b648471", 00:07:45.327 "is_configured": true, 00:07:45.327 "data_offset": 2048, 00:07:45.327 "data_size": 63488 00:07:45.327 }, 00:07:45.327 { 00:07:45.327 "name": "BaseBdev2", 00:07:45.327 "uuid": "28b8455e-403f-4968-9af1-02e005a5ef6f", 00:07:45.327 "is_configured": true, 00:07:45.327 "data_offset": 2048, 00:07:45.327 "data_size": 63488 00:07:45.327 } 00:07:45.327 ] 00:07:45.327 }' 00:07:45.327 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.327 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.586 [2024-11-18 13:24:15.555778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.586 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.586 "name": "Existed_Raid", 00:07:45.586 "aliases": [ 00:07:45.586 "86a3c860-32cc-4219-a46a-ec05f0799cc4" 00:07:45.586 ], 00:07:45.586 "product_name": "Raid Volume", 00:07:45.586 "block_size": 512, 00:07:45.586 "num_blocks": 126976, 00:07:45.586 "uuid": "86a3c860-32cc-4219-a46a-ec05f0799cc4", 00:07:45.586 "assigned_rate_limits": { 00:07:45.586 "rw_ios_per_sec": 0, 00:07:45.586 "rw_mbytes_per_sec": 0, 00:07:45.586 "r_mbytes_per_sec": 0, 00:07:45.586 "w_mbytes_per_sec": 0 00:07:45.586 }, 00:07:45.586 "claimed": false, 00:07:45.587 "zoned": false, 00:07:45.587 "supported_io_types": { 00:07:45.587 "read": true, 00:07:45.587 "write": true, 00:07:45.587 "unmap": true, 00:07:45.587 "flush": true, 00:07:45.587 "reset": true, 00:07:45.587 "nvme_admin": false, 00:07:45.587 "nvme_io": false, 00:07:45.587 "nvme_io_md": false, 00:07:45.587 "write_zeroes": true, 00:07:45.587 "zcopy": false, 00:07:45.587 "get_zone_info": false, 00:07:45.587 "zone_management": false, 00:07:45.587 "zone_append": false, 00:07:45.587 "compare": false, 00:07:45.587 "compare_and_write": false, 00:07:45.587 "abort": false, 00:07:45.587 "seek_hole": false, 00:07:45.587 "seek_data": false, 00:07:45.587 "copy": false, 00:07:45.587 "nvme_iov_md": false 00:07:45.587 }, 00:07:45.587 "memory_domains": [ 00:07:45.587 { 00:07:45.587 
"dma_device_id": "system", 00:07:45.587 "dma_device_type": 1 00:07:45.587 }, 00:07:45.587 { 00:07:45.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.587 "dma_device_type": 2 00:07:45.587 }, 00:07:45.587 { 00:07:45.587 "dma_device_id": "system", 00:07:45.587 "dma_device_type": 1 00:07:45.587 }, 00:07:45.587 { 00:07:45.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.587 "dma_device_type": 2 00:07:45.587 } 00:07:45.587 ], 00:07:45.587 "driver_specific": { 00:07:45.587 "raid": { 00:07:45.587 "uuid": "86a3c860-32cc-4219-a46a-ec05f0799cc4", 00:07:45.587 "strip_size_kb": 64, 00:07:45.587 "state": "online", 00:07:45.587 "raid_level": "raid0", 00:07:45.587 "superblock": true, 00:07:45.587 "num_base_bdevs": 2, 00:07:45.587 "num_base_bdevs_discovered": 2, 00:07:45.587 "num_base_bdevs_operational": 2, 00:07:45.587 "base_bdevs_list": [ 00:07:45.587 { 00:07:45.587 "name": "BaseBdev1", 00:07:45.587 "uuid": "2bb7d3a2-ce5f-4d6b-bd86-d35b6b648471", 00:07:45.587 "is_configured": true, 00:07:45.587 "data_offset": 2048, 00:07:45.587 "data_size": 63488 00:07:45.587 }, 00:07:45.587 { 00:07:45.587 "name": "BaseBdev2", 00:07:45.587 "uuid": "28b8455e-403f-4968-9af1-02e005a5ef6f", 00:07:45.587 "is_configured": true, 00:07:45.587 "data_offset": 2048, 00:07:45.587 "data_size": 63488 00:07:45.587 } 00:07:45.587 ] 00:07:45.587 } 00:07:45.587 } 00:07:45.587 }' 00:07:45.587 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:45.847 BaseBdev2' 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.847 13:24:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.847 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.847 [2024-11-18 13:24:15.787093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:45.847 [2024-11-18 13:24:15.787162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.847 [2024-11-18 13:24:15.787229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.230 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.231 "name": "Existed_Raid", 00:07:46.231 "uuid": "86a3c860-32cc-4219-a46a-ec05f0799cc4", 00:07:46.231 "strip_size_kb": 64, 00:07:46.231 "state": "offline", 00:07:46.231 "raid_level": "raid0", 00:07:46.231 "superblock": true, 00:07:46.231 "num_base_bdevs": 2, 00:07:46.231 "num_base_bdevs_discovered": 1, 00:07:46.231 "num_base_bdevs_operational": 1, 00:07:46.231 "base_bdevs_list": [ 00:07:46.231 { 00:07:46.231 "name": null, 00:07:46.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.231 "is_configured": false, 00:07:46.231 "data_offset": 0, 00:07:46.231 "data_size": 63488 00:07:46.231 }, 00:07:46.231 { 00:07:46.231 "name": "BaseBdev2", 00:07:46.231 "uuid": "28b8455e-403f-4968-9af1-02e005a5ef6f", 00:07:46.231 "is_configured": true, 00:07:46.231 "data_offset": 2048, 00:07:46.231 "data_size": 63488 00:07:46.231 } 00:07:46.231 ] 
00:07:46.231 }' 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.231 13:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.498 [2024-11-18 13:24:16.363402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:46.498 [2024-11-18 13:24:16.363483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.498 13:24:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60971 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60971 ']' 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60971 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.498 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60971 00:07:46.759 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.759 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:46.759 killing process with pid 60971 00:07:46.759 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60971' 00:07:46.759 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60971 00:07:46.759 [2024-11-18 13:24:16.585411] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.759 13:24:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60971 00:07:46.759 [2024-11-18 13:24:16.606137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.141 13:24:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.141 00:07:48.141 real 0m5.285s 00:07:48.141 user 0m7.364s 00:07:48.141 sys 0m0.951s 00:07:48.141 13:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.141 13:24:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.141 ************************************ 00:07:48.141 END TEST raid_state_function_test_sb 00:07:48.141 ************************************ 00:07:48.141 13:24:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:48.141 13:24:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:48.141 13:24:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.141 13:24:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.141 ************************************ 00:07:48.141 START TEST raid_superblock_test 00:07:48.141 ************************************ 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- 
# local num_base_bdevs=2 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61223 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61223 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61223 ']' 00:07:48.141 13:24:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.141 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.141 [2024-11-18 13:24:18.098692] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:48.141 [2024-11-18 13:24:18.098838] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61223 ] 00:07:48.401 [2024-11-18 13:24:18.272697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.401 [2024-11-18 13:24:18.425683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.660 [2024-11-18 13:24:18.689304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.660 [2024-11-18 13:24:18.689359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:48.921 
13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.921 13:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.182 malloc1 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.182 [2024-11-18 13:24:19.026674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:49.182 [2024-11-18 13:24:19.026762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.182 [2024-11-18 13:24:19.026794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:49.182 [2024-11-18 13:24:19.026805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:49.182 [2024-11-18 13:24:19.029556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.182 [2024-11-18 13:24:19.029598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.182 pt1 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.182 malloc2 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.182 [2024-11-18 13:24:19.089869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:49.182 [2024-11-18 13:24:19.089947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.182 [2024-11-18 13:24:19.089977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:49.182 [2024-11-18 13:24:19.089988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.182 [2024-11-18 13:24:19.092794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.182 [2024-11-18 13:24:19.092837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:49.182 pt2 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.182 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.183 [2024-11-18 13:24:19.101904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.183 [2024-11-18 13:24:19.104230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:49.183 [2024-11-18 13:24:19.104434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:49.183 [2024-11-18 13:24:19.104455] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:49.183 [2024-11-18 13:24:19.104753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:49.183 [2024-11-18 13:24:19.104945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:49.183 [2024-11-18 13:24:19.104965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:49.183 [2024-11-18 13:24:19.105162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.183 13:24:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.183 "name": "raid_bdev1", 00:07:49.183 "uuid": "f6678c37-77fa-4a80-8fe5-bf1083ba018b", 00:07:49.183 "strip_size_kb": 64, 00:07:49.183 "state": "online", 00:07:49.183 "raid_level": "raid0", 00:07:49.183 "superblock": true, 00:07:49.183 "num_base_bdevs": 2, 00:07:49.183 "num_base_bdevs_discovered": 2, 00:07:49.183 "num_base_bdevs_operational": 2, 00:07:49.183 "base_bdevs_list": [ 00:07:49.183 { 00:07:49.183 "name": "pt1", 00:07:49.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.183 "is_configured": true, 00:07:49.183 "data_offset": 2048, 00:07:49.183 "data_size": 63488 00:07:49.183 }, 00:07:49.183 { 00:07:49.183 "name": "pt2", 00:07:49.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.183 "is_configured": true, 00:07:49.183 "data_offset": 2048, 00:07:49.183 "data_size": 63488 00:07:49.183 } 00:07:49.183 ] 00:07:49.183 }' 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.183 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.752 [2024-11-18 13:24:19.593561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:49.752 "name": "raid_bdev1", 00:07:49.752 "aliases": [ 00:07:49.752 "f6678c37-77fa-4a80-8fe5-bf1083ba018b" 00:07:49.752 ], 00:07:49.752 "product_name": "Raid Volume", 00:07:49.752 "block_size": 512, 00:07:49.752 "num_blocks": 126976, 00:07:49.752 "uuid": "f6678c37-77fa-4a80-8fe5-bf1083ba018b", 00:07:49.752 "assigned_rate_limits": { 00:07:49.752 "rw_ios_per_sec": 0, 00:07:49.752 "rw_mbytes_per_sec": 0, 00:07:49.752 "r_mbytes_per_sec": 0, 00:07:49.752 "w_mbytes_per_sec": 0 00:07:49.752 }, 00:07:49.752 "claimed": false, 00:07:49.752 "zoned": false, 00:07:49.752 "supported_io_types": { 00:07:49.752 "read": true, 00:07:49.752 "write": true, 00:07:49.752 "unmap": true, 00:07:49.752 "flush": true, 00:07:49.752 "reset": true, 00:07:49.752 "nvme_admin": false, 00:07:49.752 "nvme_io": false, 00:07:49.752 "nvme_io_md": false, 00:07:49.752 "write_zeroes": true, 00:07:49.752 "zcopy": false, 00:07:49.752 "get_zone_info": false, 00:07:49.752 "zone_management": false, 00:07:49.752 "zone_append": false, 00:07:49.752 "compare": false, 00:07:49.752 "compare_and_write": false, 00:07:49.752 "abort": false, 00:07:49.752 
"seek_hole": false, 00:07:49.752 "seek_data": false, 00:07:49.752 "copy": false, 00:07:49.752 "nvme_iov_md": false 00:07:49.752 }, 00:07:49.752 "memory_domains": [ 00:07:49.752 { 00:07:49.752 "dma_device_id": "system", 00:07:49.752 "dma_device_type": 1 00:07:49.752 }, 00:07:49.752 { 00:07:49.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.752 "dma_device_type": 2 00:07:49.752 }, 00:07:49.752 { 00:07:49.752 "dma_device_id": "system", 00:07:49.752 "dma_device_type": 1 00:07:49.752 }, 00:07:49.752 { 00:07:49.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.752 "dma_device_type": 2 00:07:49.752 } 00:07:49.752 ], 00:07:49.752 "driver_specific": { 00:07:49.752 "raid": { 00:07:49.752 "uuid": "f6678c37-77fa-4a80-8fe5-bf1083ba018b", 00:07:49.752 "strip_size_kb": 64, 00:07:49.752 "state": "online", 00:07:49.752 "raid_level": "raid0", 00:07:49.752 "superblock": true, 00:07:49.752 "num_base_bdevs": 2, 00:07:49.752 "num_base_bdevs_discovered": 2, 00:07:49.752 "num_base_bdevs_operational": 2, 00:07:49.752 "base_bdevs_list": [ 00:07:49.752 { 00:07:49.752 "name": "pt1", 00:07:49.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.752 "is_configured": true, 00:07:49.752 "data_offset": 2048, 00:07:49.752 "data_size": 63488 00:07:49.752 }, 00:07:49.752 { 00:07:49.752 "name": "pt2", 00:07:49.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.752 "is_configured": true, 00:07:49.752 "data_offset": 2048, 00:07:49.752 "data_size": 63488 00:07:49.752 } 00:07:49.752 ] 00:07:49.752 } 00:07:49.752 } 00:07:49.752 }' 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:49.752 pt2' 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.752 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.012 [2024-11-18 13:24:19.837102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f6678c37-77fa-4a80-8fe5-bf1083ba018b 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f6678c37-77fa-4a80-8fe5-bf1083ba018b ']' 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.012 [2024-11-18 13:24:19.864707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.012 [2024-11-18 13:24:19.864764] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.012 [2024-11-18 13:24:19.864890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.012 [2024-11-18 13:24:19.864948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.012 [2024-11-18 13:24:19.864967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:50.012 13:24:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.012 13:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.012 [2024-11-18 13:24:20.000495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:50.012 [2024-11-18 13:24:20.002888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:50.012 [2024-11-18 13:24:20.002973] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:50.012 [2024-11-18 13:24:20.003035] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:50.013 [2024-11-18 13:24:20.003055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.013 [2024-11-18 13:24:20.003071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:50.013 request: 00:07:50.013 { 00:07:50.013 "name": "raid_bdev1", 00:07:50.013 "raid_level": "raid0", 00:07:50.013 "base_bdevs": [ 00:07:50.013 "malloc1", 00:07:50.013 "malloc2" 00:07:50.013 ], 00:07:50.013 "strip_size_kb": 64, 00:07:50.013 "superblock": false, 00:07:50.013 "method": "bdev_raid_create", 00:07:50.013 "req_id": 1 00:07:50.013 } 00:07:50.013 Got JSON-RPC error response 00:07:50.013 response: 00:07:50.013 { 00:07:50.013 "code": -17, 00:07:50.013 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:50.013 } 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.013 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.272 [2024-11-18 13:24:20.068352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:50.272 [2024-11-18 13:24:20.068425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.272 [2024-11-18 13:24:20.068449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:50.272 [2024-11-18 13:24:20.068462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.272 [2024-11-18 13:24:20.071039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.272 [2024-11-18 13:24:20.071082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:50.272 [2024-11-18 13:24:20.071211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:50.272 [2024-11-18 13:24:20.071287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:50.272 pt1 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.272 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.272 "name": "raid_bdev1", 00:07:50.272 "uuid": "f6678c37-77fa-4a80-8fe5-bf1083ba018b", 00:07:50.272 "strip_size_kb": 64, 00:07:50.272 "state": "configuring", 00:07:50.272 "raid_level": "raid0", 00:07:50.272 "superblock": true, 00:07:50.272 "num_base_bdevs": 2, 00:07:50.272 "num_base_bdevs_discovered": 1, 00:07:50.272 "num_base_bdevs_operational": 2, 00:07:50.272 "base_bdevs_list": [ 00:07:50.272 { 00:07:50.272 "name": 
"pt1", 00:07:50.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.272 "is_configured": true, 00:07:50.272 "data_offset": 2048, 00:07:50.272 "data_size": 63488 00:07:50.272 }, 00:07:50.272 { 00:07:50.272 "name": null, 00:07:50.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.273 "is_configured": false, 00:07:50.273 "data_offset": 2048, 00:07:50.273 "data_size": 63488 00:07:50.273 } 00:07:50.273 ] 00:07:50.273 }' 00:07:50.273 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.273 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.532 [2024-11-18 13:24:20.495658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.532 [2024-11-18 13:24:20.495759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.532 [2024-11-18 13:24:20.495791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:50.532 [2024-11-18 13:24:20.495804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.532 [2024-11-18 13:24:20.496361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.532 [2024-11-18 13:24:20.496391] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:07:50.532 [2024-11-18 13:24:20.496498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:50.532 [2024-11-18 13:24:20.496532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.532 [2024-11-18 13:24:20.496661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.532 [2024-11-18 13:24:20.496680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.532 [2024-11-18 13:24:20.496938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:50.532 [2024-11-18 13:24:20.497095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.532 [2024-11-18 13:24:20.497109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:50.532 [2024-11-18 13:24:20.497280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.532 pt2 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.532 13:24:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.532 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.533 "name": "raid_bdev1", 00:07:50.533 "uuid": "f6678c37-77fa-4a80-8fe5-bf1083ba018b", 00:07:50.533 "strip_size_kb": 64, 00:07:50.533 "state": "online", 00:07:50.533 "raid_level": "raid0", 00:07:50.533 "superblock": true, 00:07:50.533 "num_base_bdevs": 2, 00:07:50.533 "num_base_bdevs_discovered": 2, 00:07:50.533 "num_base_bdevs_operational": 2, 00:07:50.533 "base_bdevs_list": [ 00:07:50.533 { 00:07:50.533 "name": "pt1", 00:07:50.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.533 "is_configured": true, 00:07:50.533 "data_offset": 2048, 00:07:50.533 "data_size": 63488 00:07:50.533 }, 00:07:50.533 { 00:07:50.533 "name": "pt2", 00:07:50.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.533 "is_configured": true, 00:07:50.533 "data_offset": 2048, 00:07:50.533 "data_size": 63488 00:07:50.533 } 
00:07:50.533 ] 00:07:50.533 }' 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.533 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.102 13:24:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.102 [2024-11-18 13:24:20.983151] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.102 "name": "raid_bdev1", 00:07:51.102 "aliases": [ 00:07:51.102 "f6678c37-77fa-4a80-8fe5-bf1083ba018b" 00:07:51.102 ], 00:07:51.102 "product_name": "Raid Volume", 00:07:51.102 "block_size": 512, 00:07:51.102 "num_blocks": 126976, 00:07:51.102 "uuid": "f6678c37-77fa-4a80-8fe5-bf1083ba018b", 00:07:51.102 "assigned_rate_limits": { 00:07:51.102 "rw_ios_per_sec": 0, 
00:07:51.102 "rw_mbytes_per_sec": 0, 00:07:51.102 "r_mbytes_per_sec": 0, 00:07:51.102 "w_mbytes_per_sec": 0 00:07:51.102 }, 00:07:51.102 "claimed": false, 00:07:51.102 "zoned": false, 00:07:51.102 "supported_io_types": { 00:07:51.102 "read": true, 00:07:51.102 "write": true, 00:07:51.102 "unmap": true, 00:07:51.102 "flush": true, 00:07:51.102 "reset": true, 00:07:51.102 "nvme_admin": false, 00:07:51.102 "nvme_io": false, 00:07:51.102 "nvme_io_md": false, 00:07:51.102 "write_zeroes": true, 00:07:51.102 "zcopy": false, 00:07:51.102 "get_zone_info": false, 00:07:51.102 "zone_management": false, 00:07:51.102 "zone_append": false, 00:07:51.102 "compare": false, 00:07:51.102 "compare_and_write": false, 00:07:51.102 "abort": false, 00:07:51.102 "seek_hole": false, 00:07:51.102 "seek_data": false, 00:07:51.102 "copy": false, 00:07:51.102 "nvme_iov_md": false 00:07:51.102 }, 00:07:51.102 "memory_domains": [ 00:07:51.102 { 00:07:51.102 "dma_device_id": "system", 00:07:51.102 "dma_device_type": 1 00:07:51.102 }, 00:07:51.102 { 00:07:51.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.102 "dma_device_type": 2 00:07:51.102 }, 00:07:51.102 { 00:07:51.102 "dma_device_id": "system", 00:07:51.102 "dma_device_type": 1 00:07:51.102 }, 00:07:51.102 { 00:07:51.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.102 "dma_device_type": 2 00:07:51.102 } 00:07:51.102 ], 00:07:51.102 "driver_specific": { 00:07:51.102 "raid": { 00:07:51.102 "uuid": "f6678c37-77fa-4a80-8fe5-bf1083ba018b", 00:07:51.102 "strip_size_kb": 64, 00:07:51.102 "state": "online", 00:07:51.102 "raid_level": "raid0", 00:07:51.102 "superblock": true, 00:07:51.102 "num_base_bdevs": 2, 00:07:51.102 "num_base_bdevs_discovered": 2, 00:07:51.102 "num_base_bdevs_operational": 2, 00:07:51.102 "base_bdevs_list": [ 00:07:51.102 { 00:07:51.102 "name": "pt1", 00:07:51.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.102 "is_configured": true, 00:07:51.102 "data_offset": 2048, 00:07:51.102 "data_size": 63488 
00:07:51.102 }, 00:07:51.102 { 00:07:51.102 "name": "pt2", 00:07:51.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.102 "is_configured": true, 00:07:51.102 "data_offset": 2048, 00:07:51.102 "data_size": 63488 00:07:51.102 } 00:07:51.102 ] 00:07:51.102 } 00:07:51.102 } 00:07:51.102 }' 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:51.102 pt2' 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.102 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.362 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.362 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:51.363 13:24:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.363 [2024-11-18 13:24:21.234719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f6678c37-77fa-4a80-8fe5-bf1083ba018b '!=' f6678c37-77fa-4a80-8fe5-bf1083ba018b ']' 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61223 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61223 ']' 00:07:51.363 13:24:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61223 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61223 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.363 killing process with pid 61223 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61223' 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61223 00:07:51.363 [2024-11-18 13:24:21.286178] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.363 [2024-11-18 13:24:21.286339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.363 13:24:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61223 00:07:51.363 [2024-11-18 13:24:21.286407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.363 [2024-11-18 13:24:21.286428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:51.622 [2024-11-18 13:24:21.522508] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.003 13:24:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:53.003 00:07:53.003 real 0m4.781s 00:07:53.003 user 0m6.580s 00:07:53.003 sys 0m0.885s 00:07:53.003 13:24:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.003 13:24:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.003 ************************************ 00:07:53.003 END TEST raid_superblock_test 00:07:53.003 ************************************ 00:07:53.003 13:24:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:53.003 13:24:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.003 13:24:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.003 13:24:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.003 ************************************ 00:07:53.003 START TEST raid_read_error_test 00:07:53.003 ************************************ 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.003 13:24:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YLBEZPpO6J 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61440 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61440 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61440 ']' 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 
00:07:53.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.003 13:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.003 [2024-11-18 13:24:22.976056] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:53.003 [2024-11-18 13:24:22.976225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61440 ] 00:07:53.262 [2024-11-18 13:24:23.157269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.263 [2024-11-18 13:24:23.300741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.521 [2024-11-18 13:24:23.553549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.521 [2024-11-18 13:24:23.553612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.779 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.779 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.779 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.779 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.779 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.779 13:24:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.038 BaseBdev1_malloc 00:07:54.038 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.038 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.039 true 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.039 [2024-11-18 13:24:23.886741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:54.039 [2024-11-18 13:24:23.886806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.039 [2024-11-18 13:24:23.886832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:54.039 [2024-11-18 13:24:23.886849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.039 [2024-11-18 13:24:23.889332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.039 [2024-11-18 13:24:23.889372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:54.039 BaseBdev1 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.039 13:24:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.039 BaseBdev2_malloc 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.039 true 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.039 [2024-11-18 13:24:23.957888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.039 [2024-11-18 13:24:23.957957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.039 [2024-11-18 13:24:23.957976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:54.039 [2024-11-18 13:24:23.957989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.039 [2024-11-18 13:24:23.960448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.039 [2024-11-18 13:24:23.960491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:54.039 BaseBdev2 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.039 [2024-11-18 13:24:23.969938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.039 [2024-11-18 13:24:23.972225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.039 [2024-11-18 13:24:23.972449] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:54.039 [2024-11-18 13:24:23.972472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.039 [2024-11-18 13:24:23.972739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:54.039 [2024-11-18 13:24:23.972952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:54.039 [2024-11-18 13:24:23.972972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:54.039 [2024-11-18 13:24:23.973160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.039 13:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.039 13:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.039 "name": "raid_bdev1", 00:07:54.039 "uuid": "77d3b0f9-ef69-4812-ab76-2ecfe16be395", 00:07:54.039 "strip_size_kb": 64, 00:07:54.039 "state": "online", 00:07:54.039 "raid_level": "raid0", 00:07:54.039 "superblock": true, 00:07:54.039 "num_base_bdevs": 2, 00:07:54.039 "num_base_bdevs_discovered": 2, 00:07:54.039 "num_base_bdevs_operational": 2, 00:07:54.039 "base_bdevs_list": [ 00:07:54.039 { 00:07:54.039 "name": "BaseBdev1", 00:07:54.039 "uuid": "ea2d4b74-16c7-582a-993e-8e1bfc235d10", 00:07:54.039 "is_configured": true, 00:07:54.039 "data_offset": 2048, 00:07:54.039 "data_size": 63488 
00:07:54.039 }, 00:07:54.039 { 00:07:54.039 "name": "BaseBdev2", 00:07:54.039 "uuid": "037b451c-fa15-58c1-a361-6bee1cce60cc", 00:07:54.039 "is_configured": true, 00:07:54.039 "data_offset": 2048, 00:07:54.039 "data_size": 63488 00:07:54.039 } 00:07:54.039 ] 00:07:54.039 }' 00:07:54.039 13:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.039 13:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.608 13:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:54.608 13:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:54.608 [2024-11-18 13:24:24.522772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.546 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.547 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.547 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.547 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.547 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.547 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.547 "name": "raid_bdev1", 00:07:55.547 "uuid": "77d3b0f9-ef69-4812-ab76-2ecfe16be395", 00:07:55.547 "strip_size_kb": 64, 00:07:55.547 "state": "online", 00:07:55.547 "raid_level": "raid0", 00:07:55.547 "superblock": true, 00:07:55.547 "num_base_bdevs": 2, 00:07:55.547 "num_base_bdevs_discovered": 2, 00:07:55.547 "num_base_bdevs_operational": 2, 00:07:55.547 "base_bdevs_list": [ 00:07:55.547 { 00:07:55.547 "name": "BaseBdev1", 00:07:55.547 "uuid": "ea2d4b74-16c7-582a-993e-8e1bfc235d10", 00:07:55.547 "is_configured": true, 00:07:55.547 "data_offset": 2048, 00:07:55.547 "data_size": 63488 
00:07:55.547 }, 00:07:55.547 { 00:07:55.547 "name": "BaseBdev2", 00:07:55.547 "uuid": "037b451c-fa15-58c1-a361-6bee1cce60cc", 00:07:55.547 "is_configured": true, 00:07:55.547 "data_offset": 2048, 00:07:55.547 "data_size": 63488 00:07:55.547 } 00:07:55.547 ] 00:07:55.547 }' 00:07:55.547 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.547 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.115 [2024-11-18 13:24:25.919770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.115 [2024-11-18 13:24:25.919884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.115 [2024-11-18 13:24:25.922839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.115 [2024-11-18 13:24:25.922933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.115 [2024-11-18 13:24:25.922975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.115 [2024-11-18 13:24:25.922989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:56.115 { 00:07:56.115 "results": [ 00:07:56.115 { 00:07:56.115 "job": "raid_bdev1", 00:07:56.115 "core_mask": "0x1", 00:07:56.115 "workload": "randrw", 00:07:56.115 "percentage": 50, 00:07:56.115 "status": "finished", 00:07:56.115 "queue_depth": 1, 00:07:56.115 "io_size": 131072, 00:07:56.115 "runtime": 1.397674, 00:07:56.115 "iops": 13406.559755708413, 00:07:56.115 "mibps": 1675.8199694635516, 00:07:56.115 
"io_failed": 1, 00:07:56.115 "io_timeout": 0, 00:07:56.115 "avg_latency_us": 104.88145261814151, 00:07:56.115 "min_latency_us": 26.606113537117903, 00:07:56.115 "max_latency_us": 1488.1537117903931 00:07:56.115 } 00:07:56.115 ], 00:07:56.115 "core_count": 1 00:07:56.115 } 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61440 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61440 ']' 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61440 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61440 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.115 killing process with pid 61440 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61440' 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61440 00:07:56.115 [2024-11-18 13:24:25.965223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.115 13:24:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61440 00:07:56.115 [2024-11-18 13:24:26.122585] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YLBEZPpO6J 00:07:57.509 13:24:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:57.509 ************************************ 00:07:57.509 END TEST raid_read_error_test 00:07:57.509 ************************************ 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:57.509 00:07:57.509 real 0m4.587s 00:07:57.509 user 0m5.379s 00:07:57.509 sys 0m0.667s 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.509 13:24:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.509 13:24:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:57.509 13:24:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.509 13:24:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.509 13:24:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.509 ************************************ 00:07:57.509 START TEST raid_write_error_test 00:07:57.509 ************************************ 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.509 13:24:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.509 13:24:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ipP9n7Tdlg 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61580 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61580 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61580 ']' 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.509 13:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.769 [2024-11-18 13:24:27.635368] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:57.769 [2024-11-18 13:24:27.635530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61580 ] 00:07:57.769 [2024-11-18 13:24:27.819316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.028 [2024-11-18 13:24:27.963743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.287 [2024-11-18 13:24:28.201601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.287 [2024-11-18 13:24:28.201660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.545 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.545 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.545 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.545 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.545 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.545 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.545 BaseBdev1_malloc 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.546 true 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.546 [2024-11-18 13:24:28.556594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.546 [2024-11-18 13:24:28.556719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.546 [2024-11-18 13:24:28.556747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.546 [2024-11-18 13:24:28.556760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.546 [2024-11-18 13:24:28.559243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.546 [2024-11-18 13:24:28.559284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.546 BaseBdev1 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.546 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.805 BaseBdev2_malloc 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.805 13:24:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.805 true 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.805 [2024-11-18 13:24:28.630542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.805 [2024-11-18 13:24:28.630653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.805 [2024-11-18 13:24:28.630689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.805 [2024-11-18 13:24:28.630727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.805 [2024-11-18 13:24:28.633124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.805 [2024-11-18 13:24:28.633213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.805 BaseBdev2 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.805 [2024-11-18 13:24:28.642599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:58.805 [2024-11-18 13:24:28.644828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.805 [2024-11-18 13:24:28.645106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.805 [2024-11-18 13:24:28.645139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.805 [2024-11-18 13:24:28.645412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:58.805 [2024-11-18 13:24:28.645621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.805 [2024-11-18 13:24:28.645634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.805 [2024-11-18 13:24:28.645806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.805 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.805 "name": "raid_bdev1", 00:07:58.805 "uuid": "820b4b30-2c7e-44e9-a6a8-b8af55777e99", 00:07:58.805 "strip_size_kb": 64, 00:07:58.805 "state": "online", 00:07:58.805 "raid_level": "raid0", 00:07:58.805 "superblock": true, 00:07:58.805 "num_base_bdevs": 2, 00:07:58.805 "num_base_bdevs_discovered": 2, 00:07:58.805 "num_base_bdevs_operational": 2, 00:07:58.805 "base_bdevs_list": [ 00:07:58.805 { 00:07:58.805 "name": "BaseBdev1", 00:07:58.805 "uuid": "b0d52903-d1dc-533b-8ea3-faa433b8dc99", 00:07:58.805 "is_configured": true, 00:07:58.805 "data_offset": 2048, 00:07:58.805 "data_size": 63488 00:07:58.805 }, 00:07:58.806 { 00:07:58.806 "name": "BaseBdev2", 00:07:58.806 "uuid": "68eb8731-321d-5b2b-942d-f74832cb3e22", 00:07:58.806 "is_configured": true, 00:07:58.806 "data_offset": 2048, 00:07:58.806 "data_size": 63488 00:07:58.806 } 00:07:58.806 ] 00:07:58.806 }' 00:07:58.806 13:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.806 13:24:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.065 13:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:59.065 13:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:59.323 [2024-11-18 13:24:29.179272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.262 "name": "raid_bdev1", 00:08:00.262 "uuid": "820b4b30-2c7e-44e9-a6a8-b8af55777e99", 00:08:00.262 "strip_size_kb": 64, 00:08:00.262 "state": "online", 00:08:00.262 "raid_level": "raid0", 00:08:00.262 "superblock": true, 00:08:00.262 "num_base_bdevs": 2, 00:08:00.262 "num_base_bdevs_discovered": 2, 00:08:00.262 "num_base_bdevs_operational": 2, 00:08:00.262 "base_bdevs_list": [ 00:08:00.262 { 00:08:00.262 "name": "BaseBdev1", 00:08:00.262 "uuid": "b0d52903-d1dc-533b-8ea3-faa433b8dc99", 00:08:00.262 "is_configured": true, 00:08:00.262 "data_offset": 2048, 00:08:00.262 "data_size": 63488 00:08:00.262 }, 00:08:00.262 { 00:08:00.262 "name": "BaseBdev2", 00:08:00.262 "uuid": "68eb8731-321d-5b2b-942d-f74832cb3e22", 00:08:00.262 "is_configured": true, 00:08:00.262 "data_offset": 2048, 00:08:00.262 "data_size": 63488 00:08:00.262 } 00:08:00.262 ] 00:08:00.262 }' 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.262 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.829 13:24:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.829 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.829 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.829 [2024-11-18 13:24:30.608923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.829 [2024-11-18 13:24:30.609039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.830 [2024-11-18 13:24:30.612256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.830 [2024-11-18 13:24:30.612354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.830 [2024-11-18 13:24:30.612417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.830 [2024-11-18 13:24:30.612472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.830 { 00:08:00.830 "results": [ 00:08:00.830 { 00:08:00.830 "job": "raid_bdev1", 00:08:00.830 "core_mask": "0x1", 00:08:00.830 "workload": "randrw", 00:08:00.830 "percentage": 50, 00:08:00.830 "status": "finished", 00:08:00.830 "queue_depth": 1, 00:08:00.830 "io_size": 131072, 00:08:00.830 "runtime": 1.430123, 00:08:00.830 "iops": 13467.373086091196, 00:08:00.830 "mibps": 1683.4216357613996, 00:08:00.830 "io_failed": 1, 00:08:00.830 "io_timeout": 0, 00:08:00.830 "avg_latency_us": 104.45725142259774, 00:08:00.830 "min_latency_us": 27.053275109170304, 00:08:00.830 "max_latency_us": 1531.0812227074236 00:08:00.830 } 00:08:00.830 ], 00:08:00.830 "core_count": 1 00:08:00.830 } 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61580 00:08:00.830 13:24:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61580 ']' 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61580 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61580 00:08:00.830 killing process with pid 61580 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61580' 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61580 00:08:00.830 [2024-11-18 13:24:30.657461] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.830 13:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61580 00:08:00.830 [2024-11-18 13:24:30.811205] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ipP9n7Tdlg 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.205 13:24:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:02.205 00:08:02.205 real 0m4.631s 00:08:02.205 user 0m5.435s 00:08:02.205 sys 0m0.671s 00:08:02.205 ************************************ 00:08:02.205 END TEST raid_write_error_test 00:08:02.205 ************************************ 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.205 13:24:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.205 13:24:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:02.205 13:24:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:02.205 13:24:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.205 13:24:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.205 13:24:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.205 ************************************ 00:08:02.205 START TEST raid_state_function_test 00:08:02.205 ************************************ 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61724 
00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61724' 00:08:02.205 Process raid pid: 61724 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61724 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61724 ']' 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.205 13:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.464 [2024-11-18 13:24:32.329925] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:02.464 [2024-11-18 13:24:32.330213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.464 [2024-11-18 13:24:32.513827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.723 [2024-11-18 13:24:32.664230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.982 [2024-11-18 13:24:32.916322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.982 [2024-11-18 13:24:32.916501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.242 [2024-11-18 13:24:33.218160] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.242 [2024-11-18 13:24:33.218228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.242 [2024-11-18 13:24:33.218240] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.242 [2024-11-18 13:24:33.218253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.242 13:24:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.242 "name": "Existed_Raid", 00:08:03.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.242 "strip_size_kb": 64, 00:08:03.242 "state": "configuring", 00:08:03.242 
"raid_level": "concat", 00:08:03.242 "superblock": false, 00:08:03.242 "num_base_bdevs": 2, 00:08:03.242 "num_base_bdevs_discovered": 0, 00:08:03.242 "num_base_bdevs_operational": 2, 00:08:03.242 "base_bdevs_list": [ 00:08:03.242 { 00:08:03.242 "name": "BaseBdev1", 00:08:03.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.242 "is_configured": false, 00:08:03.242 "data_offset": 0, 00:08:03.242 "data_size": 0 00:08:03.242 }, 00:08:03.242 { 00:08:03.242 "name": "BaseBdev2", 00:08:03.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.242 "is_configured": false, 00:08:03.242 "data_offset": 0, 00:08:03.242 "data_size": 0 00:08:03.242 } 00:08:03.242 ] 00:08:03.242 }' 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.242 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.811 [2024-11-18 13:24:33.673345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.811 [2024-11-18 13:24:33.673473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:03.811 [2024-11-18 13:24:33.685350] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.811 [2024-11-18 13:24:33.685471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.811 [2024-11-18 13:24:33.685506] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.811 [2024-11-18 13:24:33.685537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.811 [2024-11-18 13:24:33.742469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.811 BaseBdev1 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.811 [ 00:08:03.811 { 00:08:03.811 "name": "BaseBdev1", 00:08:03.811 "aliases": [ 00:08:03.811 "b7111f8e-cd58-4f6b-bc69-92ef37ef877e" 00:08:03.811 ], 00:08:03.811 "product_name": "Malloc disk", 00:08:03.811 "block_size": 512, 00:08:03.811 "num_blocks": 65536, 00:08:03.811 "uuid": "b7111f8e-cd58-4f6b-bc69-92ef37ef877e", 00:08:03.811 "assigned_rate_limits": { 00:08:03.811 "rw_ios_per_sec": 0, 00:08:03.811 "rw_mbytes_per_sec": 0, 00:08:03.811 "r_mbytes_per_sec": 0, 00:08:03.811 "w_mbytes_per_sec": 0 00:08:03.811 }, 00:08:03.811 "claimed": true, 00:08:03.811 "claim_type": "exclusive_write", 00:08:03.811 "zoned": false, 00:08:03.811 "supported_io_types": { 00:08:03.811 "read": true, 00:08:03.811 "write": true, 00:08:03.811 "unmap": true, 00:08:03.811 "flush": true, 00:08:03.811 "reset": true, 00:08:03.811 "nvme_admin": false, 00:08:03.811 "nvme_io": false, 00:08:03.811 "nvme_io_md": false, 00:08:03.811 "write_zeroes": true, 00:08:03.811 "zcopy": true, 00:08:03.811 "get_zone_info": false, 00:08:03.811 "zone_management": false, 00:08:03.811 "zone_append": false, 00:08:03.811 "compare": false, 00:08:03.811 "compare_and_write": false, 00:08:03.811 "abort": true, 00:08:03.811 "seek_hole": false, 00:08:03.811 "seek_data": false, 00:08:03.811 "copy": true, 00:08:03.811 "nvme_iov_md": 
false 00:08:03.811 }, 00:08:03.811 "memory_domains": [ 00:08:03.811 { 00:08:03.811 "dma_device_id": "system", 00:08:03.811 "dma_device_type": 1 00:08:03.811 }, 00:08:03.811 { 00:08:03.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.811 "dma_device_type": 2 00:08:03.811 } 00:08:03.811 ], 00:08:03.811 "driver_specific": {} 00:08:03.811 } 00:08:03.811 ] 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.811 
13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.811 "name": "Existed_Raid", 00:08:03.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.811 "strip_size_kb": 64, 00:08:03.811 "state": "configuring", 00:08:03.811 "raid_level": "concat", 00:08:03.811 "superblock": false, 00:08:03.811 "num_base_bdevs": 2, 00:08:03.811 "num_base_bdevs_discovered": 1, 00:08:03.811 "num_base_bdevs_operational": 2, 00:08:03.811 "base_bdevs_list": [ 00:08:03.811 { 00:08:03.811 "name": "BaseBdev1", 00:08:03.811 "uuid": "b7111f8e-cd58-4f6b-bc69-92ef37ef877e", 00:08:03.811 "is_configured": true, 00:08:03.811 "data_offset": 0, 00:08:03.811 "data_size": 65536 00:08:03.811 }, 00:08:03.811 { 00:08:03.811 "name": "BaseBdev2", 00:08:03.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.811 "is_configured": false, 00:08:03.811 "data_offset": 0, 00:08:03.811 "data_size": 0 00:08:03.811 } 00:08:03.811 ] 00:08:03.811 }' 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.811 13:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.380 [2024-11-18 13:24:34.269684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.380 [2024-11-18 13:24:34.269763] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.380 [2024-11-18 13:24:34.281712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.380 [2024-11-18 13:24:34.284175] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.380 [2024-11-18 13:24:34.284222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.380 "name": "Existed_Raid", 00:08:04.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.380 "strip_size_kb": 64, 00:08:04.380 "state": "configuring", 00:08:04.380 "raid_level": "concat", 00:08:04.380 "superblock": false, 00:08:04.380 "num_base_bdevs": 2, 00:08:04.380 "num_base_bdevs_discovered": 1, 00:08:04.380 "num_base_bdevs_operational": 2, 00:08:04.380 "base_bdevs_list": [ 00:08:04.380 { 00:08:04.380 "name": "BaseBdev1", 00:08:04.380 "uuid": "b7111f8e-cd58-4f6b-bc69-92ef37ef877e", 00:08:04.380 "is_configured": true, 00:08:04.380 "data_offset": 0, 00:08:04.380 "data_size": 65536 00:08:04.380 }, 00:08:04.380 { 00:08:04.380 "name": "BaseBdev2", 00:08:04.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.380 "is_configured": false, 00:08:04.380 "data_offset": 0, 00:08:04.380 "data_size": 0 00:08:04.380 } 
00:08:04.380 ] 00:08:04.380 }' 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.380 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.950 [2024-11-18 13:24:34.793965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.950 [2024-11-18 13:24:34.794191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.950 [2024-11-18 13:24:34.794222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:04.950 [2024-11-18 13:24:34.794584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:04.950 [2024-11-18 13:24:34.794823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.950 [2024-11-18 13:24:34.794872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:04.950 [2024-11-18 13:24:34.795270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.950 BaseBdev2 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.950 13:24:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.950 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.950 [ 00:08:04.950 { 00:08:04.950 "name": "BaseBdev2", 00:08:04.950 "aliases": [ 00:08:04.950 "bec381e2-b9c9-46d3-b6fe-7f41a2e1c86a" 00:08:04.950 ], 00:08:04.950 "product_name": "Malloc disk", 00:08:04.950 "block_size": 512, 00:08:04.950 "num_blocks": 65536, 00:08:04.950 "uuid": "bec381e2-b9c9-46d3-b6fe-7f41a2e1c86a", 00:08:04.950 "assigned_rate_limits": { 00:08:04.950 "rw_ios_per_sec": 0, 00:08:04.950 "rw_mbytes_per_sec": 0, 00:08:04.950 "r_mbytes_per_sec": 0, 00:08:04.950 "w_mbytes_per_sec": 0 00:08:04.950 }, 00:08:04.950 "claimed": true, 00:08:04.950 "claim_type": "exclusive_write", 00:08:04.950 "zoned": false, 00:08:04.950 "supported_io_types": { 00:08:04.950 "read": true, 00:08:04.950 "write": true, 00:08:04.950 "unmap": true, 00:08:04.950 "flush": true, 00:08:04.950 "reset": true, 00:08:04.950 "nvme_admin": false, 00:08:04.950 "nvme_io": false, 00:08:04.950 "nvme_io_md": 
false, 00:08:04.950 "write_zeroes": true, 00:08:04.950 "zcopy": true, 00:08:04.950 "get_zone_info": false, 00:08:04.950 "zone_management": false, 00:08:04.950 "zone_append": false, 00:08:04.950 "compare": false, 00:08:04.950 "compare_and_write": false, 00:08:04.950 "abort": true, 00:08:04.950 "seek_hole": false, 00:08:04.950 "seek_data": false, 00:08:04.950 "copy": true, 00:08:04.950 "nvme_iov_md": false 00:08:04.950 }, 00:08:04.950 "memory_domains": [ 00:08:04.950 { 00:08:04.950 "dma_device_id": "system", 00:08:04.950 "dma_device_type": 1 00:08:04.951 }, 00:08:04.951 { 00:08:04.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.951 "dma_device_type": 2 00:08:04.951 } 00:08:04.951 ], 00:08:04.951 "driver_specific": {} 00:08:04.951 } 00:08:04.951 ] 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.951 "name": "Existed_Raid", 00:08:04.951 "uuid": "0792ac0f-0eed-4381-80c3-2e2e9313ad6a", 00:08:04.951 "strip_size_kb": 64, 00:08:04.951 "state": "online", 00:08:04.951 "raid_level": "concat", 00:08:04.951 "superblock": false, 00:08:04.951 "num_base_bdevs": 2, 00:08:04.951 "num_base_bdevs_discovered": 2, 00:08:04.951 "num_base_bdevs_operational": 2, 00:08:04.951 "base_bdevs_list": [ 00:08:04.951 { 00:08:04.951 "name": "BaseBdev1", 00:08:04.951 "uuid": "b7111f8e-cd58-4f6b-bc69-92ef37ef877e", 00:08:04.951 "is_configured": true, 00:08:04.951 "data_offset": 0, 00:08:04.951 "data_size": 65536 00:08:04.951 }, 00:08:04.951 { 00:08:04.951 "name": "BaseBdev2", 00:08:04.951 "uuid": "bec381e2-b9c9-46d3-b6fe-7f41a2e1c86a", 00:08:04.951 "is_configured": true, 00:08:04.951 "data_offset": 0, 00:08:04.951 "data_size": 65536 00:08:04.951 } 00:08:04.951 ] 00:08:04.951 }' 00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:04.951 13:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.521 [2024-11-18 13:24:35.329437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.521 "name": "Existed_Raid", 00:08:05.521 "aliases": [ 00:08:05.521 "0792ac0f-0eed-4381-80c3-2e2e9313ad6a" 00:08:05.521 ], 00:08:05.521 "product_name": "Raid Volume", 00:08:05.521 "block_size": 512, 00:08:05.521 "num_blocks": 131072, 00:08:05.521 "uuid": "0792ac0f-0eed-4381-80c3-2e2e9313ad6a", 00:08:05.521 "assigned_rate_limits": { 00:08:05.521 "rw_ios_per_sec": 0, 00:08:05.521 "rw_mbytes_per_sec": 0, 00:08:05.521 "r_mbytes_per_sec": 
0, 00:08:05.521 "w_mbytes_per_sec": 0 00:08:05.521 }, 00:08:05.521 "claimed": false, 00:08:05.521 "zoned": false, 00:08:05.521 "supported_io_types": { 00:08:05.521 "read": true, 00:08:05.521 "write": true, 00:08:05.521 "unmap": true, 00:08:05.521 "flush": true, 00:08:05.521 "reset": true, 00:08:05.521 "nvme_admin": false, 00:08:05.521 "nvme_io": false, 00:08:05.521 "nvme_io_md": false, 00:08:05.521 "write_zeroes": true, 00:08:05.521 "zcopy": false, 00:08:05.521 "get_zone_info": false, 00:08:05.521 "zone_management": false, 00:08:05.521 "zone_append": false, 00:08:05.521 "compare": false, 00:08:05.521 "compare_and_write": false, 00:08:05.521 "abort": false, 00:08:05.521 "seek_hole": false, 00:08:05.521 "seek_data": false, 00:08:05.521 "copy": false, 00:08:05.521 "nvme_iov_md": false 00:08:05.521 }, 00:08:05.521 "memory_domains": [ 00:08:05.521 { 00:08:05.521 "dma_device_id": "system", 00:08:05.521 "dma_device_type": 1 00:08:05.521 }, 00:08:05.521 { 00:08:05.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.521 "dma_device_type": 2 00:08:05.521 }, 00:08:05.521 { 00:08:05.521 "dma_device_id": "system", 00:08:05.521 "dma_device_type": 1 00:08:05.521 }, 00:08:05.521 { 00:08:05.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.521 "dma_device_type": 2 00:08:05.521 } 00:08:05.521 ], 00:08:05.521 "driver_specific": { 00:08:05.521 "raid": { 00:08:05.521 "uuid": "0792ac0f-0eed-4381-80c3-2e2e9313ad6a", 00:08:05.521 "strip_size_kb": 64, 00:08:05.521 "state": "online", 00:08:05.521 "raid_level": "concat", 00:08:05.521 "superblock": false, 00:08:05.521 "num_base_bdevs": 2, 00:08:05.521 "num_base_bdevs_discovered": 2, 00:08:05.521 "num_base_bdevs_operational": 2, 00:08:05.521 "base_bdevs_list": [ 00:08:05.521 { 00:08:05.521 "name": "BaseBdev1", 00:08:05.521 "uuid": "b7111f8e-cd58-4f6b-bc69-92ef37ef877e", 00:08:05.521 "is_configured": true, 00:08:05.521 "data_offset": 0, 00:08:05.521 "data_size": 65536 00:08:05.521 }, 00:08:05.521 { 00:08:05.521 "name": "BaseBdev2", 
00:08:05.521 "uuid": "bec381e2-b9c9-46d3-b6fe-7f41a2e1c86a", 00:08:05.521 "is_configured": true, 00:08:05.521 "data_offset": 0, 00:08:05.521 "data_size": 65536 00:08:05.521 } 00:08:05.521 ] 00:08:05.521 } 00:08:05.521 } 00:08:05.521 }' 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:05.521 BaseBdev2' 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.521 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.522 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.522 [2024-11-18 13:24:35.564913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:05.522 [2024-11-18 13:24:35.564963] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.522 [2024-11-18 13:24:35.565030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.782 "name": "Existed_Raid", 00:08:05.782 "uuid": "0792ac0f-0eed-4381-80c3-2e2e9313ad6a", 00:08:05.782 "strip_size_kb": 64, 00:08:05.782 
"state": "offline", 00:08:05.782 "raid_level": "concat", 00:08:05.782 "superblock": false, 00:08:05.782 "num_base_bdevs": 2, 00:08:05.782 "num_base_bdevs_discovered": 1, 00:08:05.782 "num_base_bdevs_operational": 1, 00:08:05.782 "base_bdevs_list": [ 00:08:05.782 { 00:08:05.782 "name": null, 00:08:05.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.782 "is_configured": false, 00:08:05.782 "data_offset": 0, 00:08:05.782 "data_size": 65536 00:08:05.782 }, 00:08:05.782 { 00:08:05.782 "name": "BaseBdev2", 00:08:05.782 "uuid": "bec381e2-b9c9-46d3-b6fe-7f41a2e1c86a", 00:08:05.782 "is_configured": true, 00:08:05.782 "data_offset": 0, 00:08:05.782 "data_size": 65536 00:08:05.782 } 00:08:05.782 ] 00:08:05.782 }' 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.782 13:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.351 [2024-11-18 13:24:36.172710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.351 [2024-11-18 13:24:36.172784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.351 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61724 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61724 ']' 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61724 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61724 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.352 killing process with pid 61724 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61724' 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61724 00:08:06.352 [2024-11-18 13:24:36.375785] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.352 13:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61724 00:08:06.352 [2024-11-18 13:24:36.393687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:07.732 00:08:07.732 real 0m5.419s 00:08:07.732 user 0m7.687s 00:08:07.732 sys 0m1.000s 00:08:07.732 ************************************ 00:08:07.732 END TEST raid_state_function_test 00:08:07.732 ************************************ 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.732 13:24:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:07.732 13:24:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:07.732 13:24:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.732 13:24:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.732 ************************************ 00:08:07.732 START TEST raid_state_function_test_sb 00:08:07.732 ************************************ 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61977 00:08:07.732 Process raid pid: 61977 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61977' 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61977 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61977 ']' 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.732 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.732 13:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.992 [2024-11-18 13:24:37.817108] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:07.992 [2024-11-18 13:24:37.817260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.992 [2024-11-18 13:24:38.000143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.252 [2024-11-18 13:24:38.144581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.512 [2024-11-18 13:24:38.386391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.512 [2024-11-18 13:24:38.386460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.772 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.772 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:08.772 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.773 [2024-11-18 13:24:38.703477] bdev.c:8259:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:08.773 [2024-11-18 13:24:38.703546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.773 [2024-11-18 13:24:38.703558] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.773 [2024-11-18 13:24:38.703568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.773 "name": "Existed_Raid", 00:08:08.773 "uuid": "f5f14a5d-d27c-4503-9385-0f91752a6442", 00:08:08.773 "strip_size_kb": 64, 00:08:08.773 "state": "configuring", 00:08:08.773 "raid_level": "concat", 00:08:08.773 "superblock": true, 00:08:08.773 "num_base_bdevs": 2, 00:08:08.773 "num_base_bdevs_discovered": 0, 00:08:08.773 "num_base_bdevs_operational": 2, 00:08:08.773 "base_bdevs_list": [ 00:08:08.773 { 00:08:08.773 "name": "BaseBdev1", 00:08:08.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.773 "is_configured": false, 00:08:08.773 "data_offset": 0, 00:08:08.773 "data_size": 0 00:08:08.773 }, 00:08:08.773 { 00:08:08.773 "name": "BaseBdev2", 00:08:08.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.773 "is_configured": false, 00:08:08.773 "data_offset": 0, 00:08:08.773 "data_size": 0 00:08:08.773 } 00:08:08.773 ] 00:08:08.773 }' 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.773 13:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.349 [2024-11-18 13:24:39.178596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:09.349 [2024-11-18 13:24:39.178650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.349 [2024-11-18 13:24:39.190566] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.349 [2024-11-18 13:24:39.190619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.349 [2024-11-18 13:24:39.190630] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.349 [2024-11-18 13:24:39.190644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.349 [2024-11-18 13:24:39.245405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.349 BaseBdev1 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.349 [ 00:08:09.349 { 00:08:09.349 "name": "BaseBdev1", 00:08:09.349 "aliases": [ 00:08:09.349 "e5dd2d4f-4435-45ba-8cd4-9b4cbb228b72" 00:08:09.349 ], 00:08:09.349 "product_name": "Malloc disk", 00:08:09.349 "block_size": 512, 00:08:09.349 "num_blocks": 65536, 00:08:09.349 "uuid": "e5dd2d4f-4435-45ba-8cd4-9b4cbb228b72", 00:08:09.349 "assigned_rate_limits": { 00:08:09.349 "rw_ios_per_sec": 0, 00:08:09.349 "rw_mbytes_per_sec": 0, 00:08:09.349 "r_mbytes_per_sec": 0, 00:08:09.349 "w_mbytes_per_sec": 0 00:08:09.349 }, 00:08:09.349 "claimed": true, 
00:08:09.349 "claim_type": "exclusive_write", 00:08:09.349 "zoned": false, 00:08:09.349 "supported_io_types": { 00:08:09.349 "read": true, 00:08:09.349 "write": true, 00:08:09.349 "unmap": true, 00:08:09.349 "flush": true, 00:08:09.349 "reset": true, 00:08:09.349 "nvme_admin": false, 00:08:09.349 "nvme_io": false, 00:08:09.349 "nvme_io_md": false, 00:08:09.349 "write_zeroes": true, 00:08:09.349 "zcopy": true, 00:08:09.349 "get_zone_info": false, 00:08:09.349 "zone_management": false, 00:08:09.349 "zone_append": false, 00:08:09.349 "compare": false, 00:08:09.349 "compare_and_write": false, 00:08:09.349 "abort": true, 00:08:09.349 "seek_hole": false, 00:08:09.349 "seek_data": false, 00:08:09.349 "copy": true, 00:08:09.349 "nvme_iov_md": false 00:08:09.349 }, 00:08:09.349 "memory_domains": [ 00:08:09.349 { 00:08:09.349 "dma_device_id": "system", 00:08:09.349 "dma_device_type": 1 00:08:09.349 }, 00:08:09.349 { 00:08:09.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.349 "dma_device_type": 2 00:08:09.349 } 00:08:09.349 ], 00:08:09.349 "driver_specific": {} 00:08:09.349 } 00:08:09.349 ] 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.349 13:24:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.349 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.349 "name": "Existed_Raid", 00:08:09.349 "uuid": "45a5fb33-7a8f-4542-8530-214c98fd6a02", 00:08:09.349 "strip_size_kb": 64, 00:08:09.349 "state": "configuring", 00:08:09.349 "raid_level": "concat", 00:08:09.350 "superblock": true, 00:08:09.350 "num_base_bdevs": 2, 00:08:09.350 "num_base_bdevs_discovered": 1, 00:08:09.350 "num_base_bdevs_operational": 2, 00:08:09.350 "base_bdevs_list": [ 00:08:09.350 { 00:08:09.350 "name": "BaseBdev1", 00:08:09.350 "uuid": "e5dd2d4f-4435-45ba-8cd4-9b4cbb228b72", 00:08:09.350 "is_configured": true, 00:08:09.350 "data_offset": 2048, 00:08:09.350 "data_size": 63488 00:08:09.350 }, 00:08:09.350 { 00:08:09.350 "name": "BaseBdev2", 00:08:09.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.350 
"is_configured": false, 00:08:09.350 "data_offset": 0, 00:08:09.350 "data_size": 0 00:08:09.350 } 00:08:09.350 ] 00:08:09.350 }' 00:08:09.350 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.350 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.930 [2024-11-18 13:24:39.756607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.930 [2024-11-18 13:24:39.756686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.930 [2024-11-18 13:24:39.764663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.930 [2024-11-18 13:24:39.766905] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.930 [2024-11-18 13:24:39.766954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.930 13:24:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.930 13:24:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.930 "name": "Existed_Raid", 00:08:09.930 "uuid": "f39a0839-2969-4ad4-9952-35116cf751f8", 00:08:09.930 "strip_size_kb": 64, 00:08:09.930 "state": "configuring", 00:08:09.930 "raid_level": "concat", 00:08:09.930 "superblock": true, 00:08:09.930 "num_base_bdevs": 2, 00:08:09.930 "num_base_bdevs_discovered": 1, 00:08:09.930 "num_base_bdevs_operational": 2, 00:08:09.930 "base_bdevs_list": [ 00:08:09.930 { 00:08:09.930 "name": "BaseBdev1", 00:08:09.930 "uuid": "e5dd2d4f-4435-45ba-8cd4-9b4cbb228b72", 00:08:09.930 "is_configured": true, 00:08:09.930 "data_offset": 2048, 00:08:09.930 "data_size": 63488 00:08:09.930 }, 00:08:09.930 { 00:08:09.930 "name": "BaseBdev2", 00:08:09.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.930 "is_configured": false, 00:08:09.930 "data_offset": 0, 00:08:09.930 "data_size": 0 00:08:09.930 } 00:08:09.930 ] 00:08:09.930 }' 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.930 13:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.203 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.203 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.203 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.463 [2024-11-18 13:24:40.275867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.463 [2024-11-18 13:24:40.276180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.463 [2024-11-18 13:24:40.276197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.463 [2024-11-18 13:24:40.276539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:10.463 [2024-11-18 13:24:40.276705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:10.463 [2024-11-18 13:24:40.276719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:10.463 [2024-11-18 13:24:40.276878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.463 BaseBdev2 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.463 13:24:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.463 [ 00:08:10.463 { 00:08:10.463 "name": "BaseBdev2", 00:08:10.463 "aliases": [ 00:08:10.463 "f62548e8-2cf0-4b6b-8c7b-1b9c542e39ff" 00:08:10.463 ], 00:08:10.463 "product_name": "Malloc disk", 00:08:10.463 "block_size": 512, 00:08:10.463 "num_blocks": 65536, 00:08:10.463 "uuid": "f62548e8-2cf0-4b6b-8c7b-1b9c542e39ff", 00:08:10.463 "assigned_rate_limits": { 00:08:10.463 "rw_ios_per_sec": 0, 00:08:10.463 "rw_mbytes_per_sec": 0, 00:08:10.463 "r_mbytes_per_sec": 0, 00:08:10.463 "w_mbytes_per_sec": 0 00:08:10.463 }, 00:08:10.463 "claimed": true, 00:08:10.463 "claim_type": "exclusive_write", 00:08:10.463 "zoned": false, 00:08:10.463 "supported_io_types": { 00:08:10.463 "read": true, 00:08:10.463 "write": true, 00:08:10.463 "unmap": true, 00:08:10.463 "flush": true, 00:08:10.463 "reset": true, 00:08:10.463 "nvme_admin": false, 00:08:10.463 "nvme_io": false, 00:08:10.463 "nvme_io_md": false, 00:08:10.463 "write_zeroes": true, 00:08:10.463 "zcopy": true, 00:08:10.463 "get_zone_info": false, 00:08:10.463 "zone_management": false, 00:08:10.463 "zone_append": false, 00:08:10.463 "compare": false, 00:08:10.463 "compare_and_write": false, 00:08:10.463 "abort": true, 00:08:10.463 "seek_hole": false, 00:08:10.463 "seek_data": false, 00:08:10.463 "copy": true, 00:08:10.463 "nvme_iov_md": false 00:08:10.463 }, 00:08:10.463 "memory_domains": [ 00:08:10.463 { 00:08:10.463 "dma_device_id": "system", 00:08:10.463 "dma_device_type": 1 00:08:10.463 }, 00:08:10.463 { 00:08:10.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.463 "dma_device_type": 2 00:08:10.463 } 00:08:10.463 ], 00:08:10.463 "driver_specific": {} 00:08:10.463 } 00:08:10.463 ] 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:10.463 13:24:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.463 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.463 13:24:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.463 "name": "Existed_Raid", 00:08:10.463 "uuid": "f39a0839-2969-4ad4-9952-35116cf751f8", 00:08:10.463 "strip_size_kb": 64, 00:08:10.463 "state": "online", 00:08:10.463 "raid_level": "concat", 00:08:10.463 "superblock": true, 00:08:10.463 "num_base_bdevs": 2, 00:08:10.463 "num_base_bdevs_discovered": 2, 00:08:10.463 "num_base_bdevs_operational": 2, 00:08:10.463 "base_bdevs_list": [ 00:08:10.463 { 00:08:10.463 "name": "BaseBdev1", 00:08:10.463 "uuid": "e5dd2d4f-4435-45ba-8cd4-9b4cbb228b72", 00:08:10.463 "is_configured": true, 00:08:10.463 "data_offset": 2048, 00:08:10.463 "data_size": 63488 00:08:10.463 }, 00:08:10.463 { 00:08:10.463 "name": "BaseBdev2", 00:08:10.463 "uuid": "f62548e8-2cf0-4b6b-8c7b-1b9c542e39ff", 00:08:10.463 "is_configured": true, 00:08:10.463 "data_offset": 2048, 00:08:10.464 "data_size": 63488 00:08:10.464 } 00:08:10.464 ] 00:08:10.464 }' 00:08:10.464 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.464 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.723 13:24:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.723 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.723 [2024-11-18 13:24:40.751462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.724 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.984 "name": "Existed_Raid", 00:08:10.984 "aliases": [ 00:08:10.984 "f39a0839-2969-4ad4-9952-35116cf751f8" 00:08:10.984 ], 00:08:10.984 "product_name": "Raid Volume", 00:08:10.984 "block_size": 512, 00:08:10.984 "num_blocks": 126976, 00:08:10.984 "uuid": "f39a0839-2969-4ad4-9952-35116cf751f8", 00:08:10.984 "assigned_rate_limits": { 00:08:10.984 "rw_ios_per_sec": 0, 00:08:10.984 "rw_mbytes_per_sec": 0, 00:08:10.984 "r_mbytes_per_sec": 0, 00:08:10.984 "w_mbytes_per_sec": 0 00:08:10.984 }, 00:08:10.984 "claimed": false, 00:08:10.984 "zoned": false, 00:08:10.984 "supported_io_types": { 00:08:10.984 "read": true, 00:08:10.984 "write": true, 00:08:10.984 "unmap": true, 00:08:10.984 "flush": true, 00:08:10.984 "reset": true, 00:08:10.984 "nvme_admin": false, 00:08:10.984 "nvme_io": false, 00:08:10.984 "nvme_io_md": false, 00:08:10.984 "write_zeroes": true, 00:08:10.984 "zcopy": false, 00:08:10.984 "get_zone_info": false, 00:08:10.984 "zone_management": false, 00:08:10.984 "zone_append": false, 00:08:10.984 "compare": false, 00:08:10.984 "compare_and_write": false, 00:08:10.984 "abort": false, 00:08:10.984 "seek_hole": false, 00:08:10.984 "seek_data": false, 00:08:10.984 "copy": false, 00:08:10.984 "nvme_iov_md": false 00:08:10.984 }, 00:08:10.984 "memory_domains": [ 00:08:10.984 { 00:08:10.984 "dma_device_id": 
"system", 00:08:10.984 "dma_device_type": 1 00:08:10.984 }, 00:08:10.984 { 00:08:10.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.984 "dma_device_type": 2 00:08:10.984 }, 00:08:10.984 { 00:08:10.984 "dma_device_id": "system", 00:08:10.984 "dma_device_type": 1 00:08:10.984 }, 00:08:10.984 { 00:08:10.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.984 "dma_device_type": 2 00:08:10.984 } 00:08:10.984 ], 00:08:10.984 "driver_specific": { 00:08:10.984 "raid": { 00:08:10.984 "uuid": "f39a0839-2969-4ad4-9952-35116cf751f8", 00:08:10.984 "strip_size_kb": 64, 00:08:10.984 "state": "online", 00:08:10.984 "raid_level": "concat", 00:08:10.984 "superblock": true, 00:08:10.984 "num_base_bdevs": 2, 00:08:10.984 "num_base_bdevs_discovered": 2, 00:08:10.984 "num_base_bdevs_operational": 2, 00:08:10.984 "base_bdevs_list": [ 00:08:10.984 { 00:08:10.984 "name": "BaseBdev1", 00:08:10.984 "uuid": "e5dd2d4f-4435-45ba-8cd4-9b4cbb228b72", 00:08:10.984 "is_configured": true, 00:08:10.984 "data_offset": 2048, 00:08:10.984 "data_size": 63488 00:08:10.984 }, 00:08:10.984 { 00:08:10.984 "name": "BaseBdev2", 00:08:10.984 "uuid": "f62548e8-2cf0-4b6b-8c7b-1b9c542e39ff", 00:08:10.984 "is_configured": true, 00:08:10.984 "data_offset": 2048, 00:08:10.984 "data_size": 63488 00:08:10.984 } 00:08:10.984 ] 00:08:10.984 } 00:08:10.984 } 00:08:10.984 }' 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:10.984 BaseBdev2' 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.984 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.985 13:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.985 [2024-11-18 13:24:40.966841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.985 [2024-11-18 13:24:40.966888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.985 [2024-11-18 13:24:40.966950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:11.244 13:24:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.244 "name": "Existed_Raid", 00:08:11.244 "uuid": "f39a0839-2969-4ad4-9952-35116cf751f8", 00:08:11.244 "strip_size_kb": 64, 00:08:11.244 "state": "offline", 00:08:11.244 "raid_level": "concat", 00:08:11.244 "superblock": true, 00:08:11.244 "num_base_bdevs": 2, 00:08:11.244 "num_base_bdevs_discovered": 1, 00:08:11.244 "num_base_bdevs_operational": 1, 00:08:11.244 "base_bdevs_list": [ 00:08:11.244 { 00:08:11.244 "name": null, 00:08:11.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.244 "is_configured": false, 00:08:11.244 "data_offset": 0, 00:08:11.244 "data_size": 63488 00:08:11.244 }, 00:08:11.244 { 00:08:11.244 "name": "BaseBdev2", 00:08:11.244 "uuid": "f62548e8-2cf0-4b6b-8c7b-1b9c542e39ff", 00:08:11.244 "is_configured": true, 00:08:11.244 "data_offset": 2048, 00:08:11.244 "data_size": 63488 00:08:11.244 } 00:08:11.244 ] 00:08:11.244 }' 00:08:11.244 
13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.244 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.503 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.504 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.764 [2024-11-18 13:24:41.606054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.764 [2024-11-18 13:24:41.606156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61977 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61977 ']' 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61977 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61977 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.764 killing process 
with pid 61977 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61977' 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61977 00:08:11.764 [2024-11-18 13:24:41.804880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.764 13:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61977 00:08:12.024 [2024-11-18 13:24:41.823851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.405 13:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.405 00:08:13.405 real 0m5.359s 00:08:13.405 user 0m7.602s 00:08:13.405 sys 0m1.005s 00:08:13.405 13:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.405 13:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.405 ************************************ 00:08:13.405 END TEST raid_state_function_test_sb 00:08:13.405 ************************************ 00:08:13.405 13:24:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:13.405 13:24:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:13.405 13:24:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.405 13:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.405 ************************************ 00:08:13.405 START TEST raid_superblock_test 00:08:13.405 ************************************ 00:08:13.405 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:13.405 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:13.405 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:08:13.405 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:13.405 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:13.405 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:13.405 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62229 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62229 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62229 ']' 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.406 13:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.406 [2024-11-18 13:24:43.239449] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:13.406 [2024-11-18 13:24:43.239575] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62229 ] 00:08:13.406 [2024-11-18 13:24:43.418705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.665 [2024-11-18 13:24:43.557260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.925 [2024-11-18 13:24:43.797953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.925 [2024-11-18 13:24:43.798038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.186 13:24:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.186 malloc1 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.186 [2024-11-18 13:24:44.167684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.186 [2024-11-18 13:24:44.167761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.186 [2024-11-18 13:24:44.167786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:14.186 [2024-11-18 13:24:44.167796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.186 
[2024-11-18 13:24:44.170227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.186 [2024-11-18 13:24:44.170260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.186 pt1 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.186 malloc2 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.186 13:24:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.186 [2024-11-18 13:24:44.227175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.186 [2024-11-18 13:24:44.227234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.186 [2024-11-18 13:24:44.227258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:14.186 [2024-11-18 13:24:44.227268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.186 [2024-11-18 13:24:44.229698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.186 [2024-11-18 13:24:44.229734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.186 pt2 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.186 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.186 [2024-11-18 13:24:44.235254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.186 [2024-11-18 13:24:44.237357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.186 [2024-11-18 13:24:44.237545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:14.186 [2024-11-18 13:24:44.237562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:14.447 
[2024-11-18 13:24:44.237814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:14.447 [2024-11-18 13:24:44.237977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:14.447 [2024-11-18 13:24:44.237994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:14.447 [2024-11-18 13:24:44.238178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.447 13:24:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.447 "name": "raid_bdev1", 00:08:14.447 "uuid": "de794a0e-9f5a-4e8c-9752-829b0ef95a73", 00:08:14.447 "strip_size_kb": 64, 00:08:14.447 "state": "online", 00:08:14.447 "raid_level": "concat", 00:08:14.447 "superblock": true, 00:08:14.447 "num_base_bdevs": 2, 00:08:14.447 "num_base_bdevs_discovered": 2, 00:08:14.447 "num_base_bdevs_operational": 2, 00:08:14.447 "base_bdevs_list": [ 00:08:14.447 { 00:08:14.447 "name": "pt1", 00:08:14.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.447 "is_configured": true, 00:08:14.447 "data_offset": 2048, 00:08:14.447 "data_size": 63488 00:08:14.447 }, 00:08:14.447 { 00:08:14.447 "name": "pt2", 00:08:14.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.447 "is_configured": true, 00:08:14.447 "data_offset": 2048, 00:08:14.447 "data_size": 63488 00:08:14.447 } 00:08:14.447 ] 00:08:14.447 }' 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.447 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:14.708 
13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.708 [2024-11-18 13:24:44.674844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.708 "name": "raid_bdev1", 00:08:14.708 "aliases": [ 00:08:14.708 "de794a0e-9f5a-4e8c-9752-829b0ef95a73" 00:08:14.708 ], 00:08:14.708 "product_name": "Raid Volume", 00:08:14.708 "block_size": 512, 00:08:14.708 "num_blocks": 126976, 00:08:14.708 "uuid": "de794a0e-9f5a-4e8c-9752-829b0ef95a73", 00:08:14.708 "assigned_rate_limits": { 00:08:14.708 "rw_ios_per_sec": 0, 00:08:14.708 "rw_mbytes_per_sec": 0, 00:08:14.708 "r_mbytes_per_sec": 0, 00:08:14.708 "w_mbytes_per_sec": 0 00:08:14.708 }, 00:08:14.708 "claimed": false, 00:08:14.708 "zoned": false, 00:08:14.708 "supported_io_types": { 00:08:14.708 "read": true, 00:08:14.708 "write": true, 00:08:14.708 "unmap": true, 00:08:14.708 "flush": true, 00:08:14.708 "reset": true, 00:08:14.708 "nvme_admin": false, 00:08:14.708 "nvme_io": false, 00:08:14.708 "nvme_io_md": false, 00:08:14.708 "write_zeroes": true, 00:08:14.708 "zcopy": false, 00:08:14.708 "get_zone_info": false, 00:08:14.708 "zone_management": false, 00:08:14.708 "zone_append": false, 00:08:14.708 "compare": false, 00:08:14.708 "compare_and_write": false, 00:08:14.708 "abort": false, 00:08:14.708 "seek_hole": false, 00:08:14.708 
"seek_data": false, 00:08:14.708 "copy": false, 00:08:14.708 "nvme_iov_md": false 00:08:14.708 }, 00:08:14.708 "memory_domains": [ 00:08:14.708 { 00:08:14.708 "dma_device_id": "system", 00:08:14.708 "dma_device_type": 1 00:08:14.708 }, 00:08:14.708 { 00:08:14.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.708 "dma_device_type": 2 00:08:14.708 }, 00:08:14.708 { 00:08:14.708 "dma_device_id": "system", 00:08:14.708 "dma_device_type": 1 00:08:14.708 }, 00:08:14.708 { 00:08:14.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.708 "dma_device_type": 2 00:08:14.708 } 00:08:14.708 ], 00:08:14.708 "driver_specific": { 00:08:14.708 "raid": { 00:08:14.708 "uuid": "de794a0e-9f5a-4e8c-9752-829b0ef95a73", 00:08:14.708 "strip_size_kb": 64, 00:08:14.708 "state": "online", 00:08:14.708 "raid_level": "concat", 00:08:14.708 "superblock": true, 00:08:14.708 "num_base_bdevs": 2, 00:08:14.708 "num_base_bdevs_discovered": 2, 00:08:14.708 "num_base_bdevs_operational": 2, 00:08:14.708 "base_bdevs_list": [ 00:08:14.708 { 00:08:14.708 "name": "pt1", 00:08:14.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.708 "is_configured": true, 00:08:14.708 "data_offset": 2048, 00:08:14.708 "data_size": 63488 00:08:14.708 }, 00:08:14.708 { 00:08:14.708 "name": "pt2", 00:08:14.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.708 "is_configured": true, 00:08:14.708 "data_offset": 2048, 00:08:14.708 "data_size": 63488 00:08:14.708 } 00:08:14.708 ] 00:08:14.708 } 00:08:14.708 } 00:08:14.708 }' 00:08:14.708 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:14.973 pt2' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.973 13:24:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.973 [2024-11-18 13:24:44.898465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=de794a0e-9f5a-4e8c-9752-829b0ef95a73 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z de794a0e-9f5a-4e8c-9752-829b0ef95a73 ']' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.973 [2024-11-18 13:24:44.942037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.973 [2024-11-18 13:24:44.942067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.973 [2024-11-18 13:24:44.942219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.973 [2024-11-18 13:24:44.942281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.973 [2024-11-18 13:24:44.942298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.973 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.974 13:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.974 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.974 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:14.974 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:14.974 13:24:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.974 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.234 [2024-11-18 13:24:45.057924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:15.234 [2024-11-18 13:24:45.059994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:15.234 [2024-11-18 13:24:45.060070] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:15.234 [2024-11-18 13:24:45.060138] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:15.234 [2024-11-18 13:24:45.060155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.234 [2024-11-18 13:24:45.060166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:15.234 request: 00:08:15.234 { 00:08:15.234 "name": "raid_bdev1", 00:08:15.234 "raid_level": "concat", 00:08:15.234 "base_bdevs": [ 00:08:15.234 "malloc1", 00:08:15.234 "malloc2" 00:08:15.234 ], 00:08:15.234 "strip_size_kb": 64, 00:08:15.234 "superblock": false, 00:08:15.234 "method": "bdev_raid_create", 00:08:15.234 "req_id": 1 00:08:15.234 } 00:08:15.234 Got JSON-RPC error response 00:08:15.234 response: 00:08:15.234 { 00:08:15.234 "code": -17, 00:08:15.234 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:15.234 } 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.234 
13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.234 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.234 [2024-11-18 13:24:45.125726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:15.234 [2024-11-18 13:24:45.125783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.234 [2024-11-18 13:24:45.125804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:15.234 [2024-11-18 13:24:45.125816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.234 [2024-11-18 13:24:45.128004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.234 [2024-11-18 13:24:45.128042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:15.234 [2024-11-18 13:24:45.128121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:15.234 [2024-11-18 13:24:45.128193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:15.235 pt1 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.235 "name": "raid_bdev1", 00:08:15.235 "uuid": "de794a0e-9f5a-4e8c-9752-829b0ef95a73", 00:08:15.235 "strip_size_kb": 64, 00:08:15.235 "state": "configuring", 00:08:15.235 "raid_level": "concat", 00:08:15.235 "superblock": true, 00:08:15.235 "num_base_bdevs": 2, 00:08:15.235 "num_base_bdevs_discovered": 1, 00:08:15.235 "num_base_bdevs_operational": 2, 00:08:15.235 "base_bdevs_list": [ 00:08:15.235 { 00:08:15.235 "name": "pt1", 00:08:15.235 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:15.235 "is_configured": true, 00:08:15.235 "data_offset": 2048, 00:08:15.235 "data_size": 63488 00:08:15.235 }, 00:08:15.235 { 00:08:15.235 "name": null, 00:08:15.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.235 "is_configured": false, 00:08:15.235 "data_offset": 2048, 00:08:15.235 "data_size": 63488 00:08:15.235 } 00:08:15.235 ] 00:08:15.235 }' 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.235 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.806 [2024-11-18 13:24:45.600971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:15.806 [2024-11-18 13:24:45.601053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.806 [2024-11-18 13:24:45.601075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:15.806 [2024-11-18 13:24:45.601086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.806 [2024-11-18 13:24:45.601560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.806 [2024-11-18 13:24:45.601582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:15.806 [2024-11-18 13:24:45.601666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:15.806 [2024-11-18 13:24:45.601688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.806 [2024-11-18 13:24:45.601791] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:15.806 [2024-11-18 13:24:45.601801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.806 [2024-11-18 13:24:45.602022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:15.806 [2024-11-18 13:24:45.602219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:15.806 [2024-11-18 13:24:45.602237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:15.806 [2024-11-18 13:24:45.602398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.806 pt2 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.806 13:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.806 "name": "raid_bdev1", 00:08:15.806 "uuid": "de794a0e-9f5a-4e8c-9752-829b0ef95a73", 00:08:15.806 "strip_size_kb": 64, 00:08:15.806 "state": "online", 00:08:15.806 "raid_level": "concat", 00:08:15.806 "superblock": true, 00:08:15.806 "num_base_bdevs": 2, 00:08:15.806 "num_base_bdevs_discovered": 2, 00:08:15.806 "num_base_bdevs_operational": 2, 00:08:15.806 "base_bdevs_list": [ 00:08:15.806 { 00:08:15.806 "name": "pt1", 00:08:15.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.806 "is_configured": true, 00:08:15.806 "data_offset": 2048, 00:08:15.806 "data_size": 63488 00:08:15.807 }, 00:08:15.807 { 00:08:15.807 "name": "pt2", 00:08:15.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.807 "is_configured": true, 00:08:15.807 "data_offset": 2048, 00:08:15.807 "data_size": 63488 00:08:15.807 } 00:08:15.807 ] 00:08:15.807 }' 00:08:15.807 13:24:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.807 13:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.066 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.067 [2024-11-18 13:24:46.060415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.067 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.067 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.067 "name": "raid_bdev1", 00:08:16.067 "aliases": [ 00:08:16.067 "de794a0e-9f5a-4e8c-9752-829b0ef95a73" 00:08:16.067 ], 00:08:16.067 "product_name": "Raid Volume", 00:08:16.067 "block_size": 512, 00:08:16.067 "num_blocks": 126976, 00:08:16.067 "uuid": "de794a0e-9f5a-4e8c-9752-829b0ef95a73", 00:08:16.067 "assigned_rate_limits": { 00:08:16.067 "rw_ios_per_sec": 0, 00:08:16.067 "rw_mbytes_per_sec": 0, 00:08:16.067 
"r_mbytes_per_sec": 0, 00:08:16.067 "w_mbytes_per_sec": 0 00:08:16.067 }, 00:08:16.067 "claimed": false, 00:08:16.067 "zoned": false, 00:08:16.067 "supported_io_types": { 00:08:16.067 "read": true, 00:08:16.067 "write": true, 00:08:16.067 "unmap": true, 00:08:16.067 "flush": true, 00:08:16.067 "reset": true, 00:08:16.067 "nvme_admin": false, 00:08:16.067 "nvme_io": false, 00:08:16.067 "nvme_io_md": false, 00:08:16.067 "write_zeroes": true, 00:08:16.067 "zcopy": false, 00:08:16.067 "get_zone_info": false, 00:08:16.067 "zone_management": false, 00:08:16.067 "zone_append": false, 00:08:16.067 "compare": false, 00:08:16.067 "compare_and_write": false, 00:08:16.067 "abort": false, 00:08:16.067 "seek_hole": false, 00:08:16.067 "seek_data": false, 00:08:16.067 "copy": false, 00:08:16.067 "nvme_iov_md": false 00:08:16.067 }, 00:08:16.067 "memory_domains": [ 00:08:16.067 { 00:08:16.067 "dma_device_id": "system", 00:08:16.067 "dma_device_type": 1 00:08:16.067 }, 00:08:16.067 { 00:08:16.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.067 "dma_device_type": 2 00:08:16.067 }, 00:08:16.067 { 00:08:16.067 "dma_device_id": "system", 00:08:16.067 "dma_device_type": 1 00:08:16.067 }, 00:08:16.067 { 00:08:16.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.067 "dma_device_type": 2 00:08:16.067 } 00:08:16.067 ], 00:08:16.067 "driver_specific": { 00:08:16.067 "raid": { 00:08:16.067 "uuid": "de794a0e-9f5a-4e8c-9752-829b0ef95a73", 00:08:16.067 "strip_size_kb": 64, 00:08:16.067 "state": "online", 00:08:16.067 "raid_level": "concat", 00:08:16.067 "superblock": true, 00:08:16.067 "num_base_bdevs": 2, 00:08:16.067 "num_base_bdevs_discovered": 2, 00:08:16.067 "num_base_bdevs_operational": 2, 00:08:16.067 "base_bdevs_list": [ 00:08:16.067 { 00:08:16.067 "name": "pt1", 00:08:16.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.067 "is_configured": true, 00:08:16.067 "data_offset": 2048, 00:08:16.067 "data_size": 63488 00:08:16.067 }, 00:08:16.067 { 00:08:16.067 "name": 
"pt2", 00:08:16.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.067 "is_configured": true, 00:08:16.067 "data_offset": 2048, 00:08:16.067 "data_size": 63488 00:08:16.067 } 00:08:16.067 ] 00:08:16.067 } 00:08:16.067 } 00:08:16.067 }' 00:08:16.067 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:16.327 pt2' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:16.327 [2024-11-18 13:24:46.311937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' de794a0e-9f5a-4e8c-9752-829b0ef95a73 '!=' de794a0e-9f5a-4e8c-9752-829b0ef95a73 ']' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62229 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62229 ']' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62229 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.327 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62229 00:08:16.586 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.586 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.586 killing process with pid 62229 00:08:16.586 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62229' 00:08:16.586 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62229 00:08:16.586 [2024-11-18 13:24:46.396910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.586 [2024-11-18 13:24:46.397029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.586 [2024-11-18 13:24:46.397086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.586 [2024-11-18 13:24:46.397099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:16.586 13:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62229 00:08:16.586 [2024-11-18 13:24:46.602681] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.964 13:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:17.964 00:08:17.964 real 0m4.562s 00:08:17.964 user 0m6.314s 00:08:17.964 sys 0m0.905s 00:08:17.965 13:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.965 13:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:17.965 ************************************ 00:08:17.965 END TEST raid_superblock_test 00:08:17.965 ************************************ 00:08:17.965 13:24:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:17.965 13:24:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.965 13:24:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.965 13:24:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.965 ************************************ 00:08:17.965 START TEST raid_read_error_test 00:08:17.965 ************************************ 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2V43Bc56iS 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62446 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62446 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62446 ']' 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.965 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.965 13:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.965 [2024-11-18 13:24:47.883018] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:17.965 [2024-11-18 13:24:47.883140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62446 ] 00:08:18.224 [2024-11-18 13:24:48.058913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.224 [2024-11-18 13:24:48.174802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.483 [2024-11-18 13:24:48.378042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.483 [2024-11-18 13:24:48.378095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.741 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.741 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.741 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.741 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.742 BaseBdev1_malloc 
00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.742 true 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.742 [2024-11-18 13:24:48.786625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.742 [2024-11-18 13:24:48.786681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.742 [2024-11-18 13:24:48.786702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.742 [2024-11-18 13:24:48.786713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.742 [2024-11-18 13:24:48.788764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.742 [2024-11-18 13:24:48.788806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.742 BaseBdev1 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.742 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.001 BaseBdev2_malloc 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.001 true 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.001 [2024-11-18 13:24:48.844829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:19.001 [2024-11-18 13:24:48.844886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.001 [2024-11-18 13:24:48.844903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:19.001 [2024-11-18 13:24:48.844915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.001 [2024-11-18 13:24:48.846988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.001 [2024-11-18 13:24:48.847028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:19.001 BaseBdev2 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.001 [2024-11-18 13:24:48.852880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.001 [2024-11-18 13:24:48.854687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.001 [2024-11-18 13:24:48.854934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:19.001 [2024-11-18 13:24:48.854962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:19.001 [2024-11-18 13:24:48.855291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:19.001 [2024-11-18 13:24:48.855537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:19.001 [2024-11-18 13:24:48.855560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:19.001 [2024-11-18 13:24:48.855726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.001 "name": "raid_bdev1", 00:08:19.001 "uuid": "a8c34f36-89b5-44b9-b747-f17777ab4a14", 00:08:19.001 "strip_size_kb": 64, 00:08:19.001 "state": "online", 00:08:19.001 "raid_level": "concat", 00:08:19.001 "superblock": true, 00:08:19.001 "num_base_bdevs": 2, 00:08:19.001 "num_base_bdevs_discovered": 2, 00:08:19.001 "num_base_bdevs_operational": 2, 00:08:19.001 "base_bdevs_list": [ 00:08:19.001 { 00:08:19.001 "name": "BaseBdev1", 00:08:19.001 "uuid": "4197c5d3-5f95-55bb-a9e0-487255e978d9", 00:08:19.001 "is_configured": true, 00:08:19.001 "data_offset": 2048, 00:08:19.001 "data_size": 63488 00:08:19.001 }, 00:08:19.001 { 00:08:19.001 "name": "BaseBdev2", 00:08:19.001 
"uuid": "ee4d3a56-954e-5f7c-a713-27203af11e13", 00:08:19.001 "is_configured": true, 00:08:19.001 "data_offset": 2048, 00:08:19.001 "data_size": 63488 00:08:19.001 } 00:08:19.001 ] 00:08:19.001 }' 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.001 13:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.265 13:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:19.265 13:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.526 [2024-11-18 13:24:49.397271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.463 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.463 "name": "raid_bdev1", 00:08:20.463 "uuid": "a8c34f36-89b5-44b9-b747-f17777ab4a14", 00:08:20.464 "strip_size_kb": 64, 00:08:20.464 "state": "online", 00:08:20.464 "raid_level": "concat", 00:08:20.464 "superblock": true, 00:08:20.464 "num_base_bdevs": 2, 00:08:20.464 "num_base_bdevs_discovered": 2, 00:08:20.464 "num_base_bdevs_operational": 2, 00:08:20.464 "base_bdevs_list": [ 00:08:20.464 { 00:08:20.464 "name": "BaseBdev1", 00:08:20.464 "uuid": "4197c5d3-5f95-55bb-a9e0-487255e978d9", 00:08:20.464 "is_configured": true, 00:08:20.464 "data_offset": 2048, 00:08:20.464 "data_size": 63488 00:08:20.464 }, 00:08:20.464 { 00:08:20.464 "name": "BaseBdev2", 00:08:20.464 "uuid": 
"ee4d3a56-954e-5f7c-a713-27203af11e13", 00:08:20.464 "is_configured": true, 00:08:20.464 "data_offset": 2048, 00:08:20.464 "data_size": 63488 00:08:20.464 } 00:08:20.464 ] 00:08:20.464 }' 00:08:20.464 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.464 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.724 [2024-11-18 13:24:50.751442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.724 [2024-11-18 13:24:50.751488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.724 [2024-11-18 13:24:50.754214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.724 [2024-11-18 13:24:50.754265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.724 [2024-11-18 13:24:50.754300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.724 [2024-11-18 13:24:50.754323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:20.724 { 00:08:20.724 "results": [ 00:08:20.724 { 00:08:20.724 "job": "raid_bdev1", 00:08:20.724 "core_mask": "0x1", 00:08:20.724 "workload": "randrw", 00:08:20.724 "percentage": 50, 00:08:20.724 "status": "finished", 00:08:20.724 "queue_depth": 1, 00:08:20.724 "io_size": 131072, 00:08:20.724 "runtime": 1.354973, 00:08:20.724 "iops": 16108.808072190368, 00:08:20.724 "mibps": 2013.601009023796, 00:08:20.724 "io_failed": 1, 00:08:20.724 "io_timeout": 0, 00:08:20.724 "avg_latency_us": 
86.21846096476382, 00:08:20.724 "min_latency_us": 25.6, 00:08:20.724 "max_latency_us": 1466.6899563318777 00:08:20.724 } 00:08:20.724 ], 00:08:20.724 "core_count": 1 00:08:20.724 } 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62446 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62446 ']' 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62446 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.724 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62446 00:08:20.983 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.983 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.983 killing process with pid 62446 00:08:20.983 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62446' 00:08:20.983 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62446 00:08:20.983 [2024-11-18 13:24:50.798241] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.983 13:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62446 00:08:20.983 [2024-11-18 13:24:50.932082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2V43Bc56iS 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:22.364 13:24:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:22.364 00:08:22.364 real 0m4.338s 00:08:22.364 user 0m5.162s 00:08:22.364 sys 0m0.602s 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.364 13:24:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.364 ************************************ 00:08:22.364 END TEST raid_read_error_test 00:08:22.364 ************************************ 00:08:22.364 13:24:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:22.364 13:24:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:22.364 13:24:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.364 13:24:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.364 ************************************ 00:08:22.364 START TEST raid_write_error_test 00:08:22.364 ************************************ 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:22.364 13:24:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mEzlynhAvz 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62586 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62586 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62586 ']' 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.364 13:24:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.364 [2024-11-18 13:24:52.302091] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:22.364 [2024-11-18 13:24:52.302246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62586 ] 00:08:22.624 [2024-11-18 13:24:52.466850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.624 [2024-11-18 13:24:52.582890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.883 [2024-11-18 13:24:52.786665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.883 [2024-11-18 13:24:52.786709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.144 BaseBdev1_malloc 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.144 true 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.144 [2024-11-18 13:24:53.187947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:23.144 [2024-11-18 13:24:53.188005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.144 [2024-11-18 13:24:53.188026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:23.144 [2024-11-18 13:24:53.188037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.144 [2024-11-18 13:24:53.190059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.144 [2024-11-18 13:24:53.190111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:23.144 BaseBdev1 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.144 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.405 BaseBdev2_malloc 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:23.405 13:24:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.405 true 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.405 [2024-11-18 13:24:53.254596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:23.405 [2024-11-18 13:24:53.254650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.405 [2024-11-18 13:24:53.254670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:23.405 [2024-11-18 13:24:53.254683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.405 [2024-11-18 13:24:53.256974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.405 [2024-11-18 13:24:53.257017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:23.405 BaseBdev2 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.405 [2024-11-18 13:24:53.266642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:23.405 [2024-11-18 13:24:53.268459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.405 [2024-11-18 13:24:53.268656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.405 [2024-11-18 13:24:53.268678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:23.405 [2024-11-18 13:24:53.268905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:23.405 [2024-11-18 13:24:53.269092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.405 [2024-11-18 13:24:53.269112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:23.405 [2024-11-18 13:24:53.269271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.405 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.406 13:24:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.406 "name": "raid_bdev1", 00:08:23.406 "uuid": "c4baead6-1e4c-425a-84b0-1e87ec01834b", 00:08:23.406 "strip_size_kb": 64, 00:08:23.406 "state": "online", 00:08:23.406 "raid_level": "concat", 00:08:23.406 "superblock": true, 00:08:23.406 "num_base_bdevs": 2, 00:08:23.406 "num_base_bdevs_discovered": 2, 00:08:23.406 "num_base_bdevs_operational": 2, 00:08:23.406 "base_bdevs_list": [ 00:08:23.406 { 00:08:23.406 "name": "BaseBdev1", 00:08:23.406 "uuid": "cf89dfaf-1eaf-51f9-b6e1-39660efd5758", 00:08:23.406 "is_configured": true, 00:08:23.406 "data_offset": 2048, 00:08:23.406 "data_size": 63488 00:08:23.406 }, 00:08:23.406 { 00:08:23.406 "name": "BaseBdev2", 00:08:23.406 "uuid": "a404bea6-04ef-572c-ac0f-f92665d43fe5", 00:08:23.406 "is_configured": true, 00:08:23.406 "data_offset": 2048, 00:08:23.406 "data_size": 63488 00:08:23.406 } 00:08:23.406 ] 00:08:23.406 }' 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.406 13:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.976 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:23.976 13:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:23.976 [2024-11-18 13:24:53.823032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.915 "name": "raid_bdev1", 00:08:24.915 "uuid": "c4baead6-1e4c-425a-84b0-1e87ec01834b", 00:08:24.915 "strip_size_kb": 64, 00:08:24.915 "state": "online", 00:08:24.915 "raid_level": "concat", 00:08:24.915 "superblock": true, 00:08:24.915 "num_base_bdevs": 2, 00:08:24.915 "num_base_bdevs_discovered": 2, 00:08:24.915 "num_base_bdevs_operational": 2, 00:08:24.915 "base_bdevs_list": [ 00:08:24.915 { 00:08:24.915 "name": "BaseBdev1", 00:08:24.915 "uuid": "cf89dfaf-1eaf-51f9-b6e1-39660efd5758", 00:08:24.915 "is_configured": true, 00:08:24.915 "data_offset": 2048, 00:08:24.915 "data_size": 63488 00:08:24.915 }, 00:08:24.915 { 00:08:24.915 "name": "BaseBdev2", 00:08:24.915 "uuid": "a404bea6-04ef-572c-ac0f-f92665d43fe5", 00:08:24.915 "is_configured": true, 00:08:24.915 "data_offset": 2048, 00:08:24.915 "data_size": 63488 00:08:24.915 } 00:08:24.915 ] 00:08:24.915 }' 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.915 13:24:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.485 [2024-11-18 13:24:55.239243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:25.485 [2024-11-18 13:24:55.239297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.485 [2024-11-18 13:24:55.241991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.485 [2024-11-18 13:24:55.242040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.485 [2024-11-18 13:24:55.242073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.485 [2024-11-18 13:24:55.242089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:25.485 { 00:08:25.485 "results": [ 00:08:25.485 { 00:08:25.485 "job": "raid_bdev1", 00:08:25.485 "core_mask": "0x1", 00:08:25.485 "workload": "randrw", 00:08:25.485 "percentage": 50, 00:08:25.485 "status": "finished", 00:08:25.485 "queue_depth": 1, 00:08:25.485 "io_size": 131072, 00:08:25.485 "runtime": 1.417287, 00:08:25.485 "iops": 16037.683263869632, 00:08:25.485 "mibps": 2004.710407983704, 00:08:25.485 "io_failed": 1, 00:08:25.485 "io_timeout": 0, 00:08:25.485 "avg_latency_us": 86.49745881151473, 00:08:25.485 "min_latency_us": 25.823580786026202, 00:08:25.485 "max_latency_us": 1709.9458515283843 00:08:25.485 } 00:08:25.485 ], 00:08:25.485 "core_count": 1 00:08:25.485 } 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62586 00:08:25.485 13:24:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62586 ']' 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62586 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62586 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.485 killing process with pid 62586 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62586' 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62586 00:08:25.485 [2024-11-18 13:24:55.289068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.485 13:24:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62586 00:08:25.485 [2024-11-18 13:24:55.423559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mEzlynhAvz 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:26.867 13:24:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:26.867 00:08:26.867 real 0m4.425s 00:08:26.867 user 0m5.306s 00:08:26.867 sys 0m0.557s 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.867 13:24:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 ************************************ 00:08:26.867 END TEST raid_write_error_test 00:08:26.867 ************************************ 00:08:26.867 13:24:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:26.867 13:24:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:26.867 13:24:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.867 13:24:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.867 13:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 ************************************ 00:08:26.867 START TEST raid_state_function_test 00:08:26.867 ************************************ 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62724 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:26.867 Process raid pid: 62724 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62724' 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62724 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62724 ']' 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.867 13:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.867 [2024-11-18 13:24:56.781396] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:26.867 [2024-11-18 13:24:56.781540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.127 [2024-11-18 13:24:56.952185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.127 [2024-11-18 13:24:57.072384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.386 [2024-11-18 13:24:57.291253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.386 [2024-11-18 13:24:57.291300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.647 [2024-11-18 13:24:57.607499] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.647 [2024-11-18 13:24:57.607560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.647 [2024-11-18 13:24:57.607571] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.647 [2024-11-18 13:24:57.607581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.647 13:24:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.647 "name": "Existed_Raid", 00:08:27.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.647 "strip_size_kb": 0, 00:08:27.647 "state": "configuring", 00:08:27.647 
"raid_level": "raid1", 00:08:27.647 "superblock": false, 00:08:27.647 "num_base_bdevs": 2, 00:08:27.647 "num_base_bdevs_discovered": 0, 00:08:27.647 "num_base_bdevs_operational": 2, 00:08:27.647 "base_bdevs_list": [ 00:08:27.647 { 00:08:27.647 "name": "BaseBdev1", 00:08:27.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.647 "is_configured": false, 00:08:27.647 "data_offset": 0, 00:08:27.647 "data_size": 0 00:08:27.647 }, 00:08:27.647 { 00:08:27.647 "name": "BaseBdev2", 00:08:27.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.647 "is_configured": false, 00:08:27.647 "data_offset": 0, 00:08:27.647 "data_size": 0 00:08:27.647 } 00:08:27.647 ] 00:08:27.647 }' 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.647 13:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.215 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.215 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 [2024-11-18 13:24:58.058710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.215 [2024-11-18 13:24:58.058757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:28.215 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.215 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:28.215 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.215 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:28.216 [2024-11-18 13:24:58.066654] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.216 [2024-11-18 13:24:58.066697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.216 [2024-11-18 13:24:58.066707] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.216 [2024-11-18 13:24:58.066719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.216 [2024-11-18 13:24:58.110685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.216 BaseBdev1 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.216 [ 00:08:28.216 { 00:08:28.216 "name": "BaseBdev1", 00:08:28.216 "aliases": [ 00:08:28.216 "213c432f-de5c-4c7e-a71d-90bc3168d91f" 00:08:28.216 ], 00:08:28.216 "product_name": "Malloc disk", 00:08:28.216 "block_size": 512, 00:08:28.216 "num_blocks": 65536, 00:08:28.216 "uuid": "213c432f-de5c-4c7e-a71d-90bc3168d91f", 00:08:28.216 "assigned_rate_limits": { 00:08:28.216 "rw_ios_per_sec": 0, 00:08:28.216 "rw_mbytes_per_sec": 0, 00:08:28.216 "r_mbytes_per_sec": 0, 00:08:28.216 "w_mbytes_per_sec": 0 00:08:28.216 }, 00:08:28.216 "claimed": true, 00:08:28.216 "claim_type": "exclusive_write", 00:08:28.216 "zoned": false, 00:08:28.216 "supported_io_types": { 00:08:28.216 "read": true, 00:08:28.216 "write": true, 00:08:28.216 "unmap": true, 00:08:28.216 "flush": true, 00:08:28.216 "reset": true, 00:08:28.216 "nvme_admin": false, 00:08:28.216 "nvme_io": false, 00:08:28.216 "nvme_io_md": false, 00:08:28.216 "write_zeroes": true, 00:08:28.216 "zcopy": true, 00:08:28.216 "get_zone_info": false, 00:08:28.216 "zone_management": false, 00:08:28.216 "zone_append": false, 00:08:28.216 "compare": false, 00:08:28.216 "compare_and_write": false, 00:08:28.216 "abort": true, 00:08:28.216 "seek_hole": false, 00:08:28.216 "seek_data": false, 00:08:28.216 "copy": true, 00:08:28.216 "nvme_iov_md": 
false 00:08:28.216 }, 00:08:28.216 "memory_domains": [ 00:08:28.216 { 00:08:28.216 "dma_device_id": "system", 00:08:28.216 "dma_device_type": 1 00:08:28.216 }, 00:08:28.216 { 00:08:28.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.216 "dma_device_type": 2 00:08:28.216 } 00:08:28.216 ], 00:08:28.216 "driver_specific": {} 00:08:28.216 } 00:08:28.216 ] 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.216 
13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.216 "name": "Existed_Raid", 00:08:28.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.216 "strip_size_kb": 0, 00:08:28.216 "state": "configuring", 00:08:28.216 "raid_level": "raid1", 00:08:28.216 "superblock": false, 00:08:28.216 "num_base_bdevs": 2, 00:08:28.216 "num_base_bdevs_discovered": 1, 00:08:28.216 "num_base_bdevs_operational": 2, 00:08:28.216 "base_bdevs_list": [ 00:08:28.216 { 00:08:28.216 "name": "BaseBdev1", 00:08:28.216 "uuid": "213c432f-de5c-4c7e-a71d-90bc3168d91f", 00:08:28.216 "is_configured": true, 00:08:28.216 "data_offset": 0, 00:08:28.216 "data_size": 65536 00:08:28.216 }, 00:08:28.216 { 00:08:28.216 "name": "BaseBdev2", 00:08:28.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.216 "is_configured": false, 00:08:28.216 "data_offset": 0, 00:08:28.216 "data_size": 0 00:08:28.216 } 00:08:28.216 ] 00:08:28.216 }' 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.216 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 [2024-11-18 13:24:58.617971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.785 [2024-11-18 13:24:58.618034] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 [2024-11-18 13:24:58.629974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.785 [2024-11-18 13:24:58.631816] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.785 [2024-11-18 13:24:58.631864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.785 "name": "Existed_Raid", 00:08:28.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.785 "strip_size_kb": 0, 00:08:28.785 "state": "configuring", 00:08:28.785 "raid_level": "raid1", 00:08:28.785 "superblock": false, 00:08:28.785 "num_base_bdevs": 2, 00:08:28.785 "num_base_bdevs_discovered": 1, 00:08:28.785 "num_base_bdevs_operational": 2, 00:08:28.785 "base_bdevs_list": [ 00:08:28.785 { 00:08:28.785 "name": "BaseBdev1", 00:08:28.785 "uuid": "213c432f-de5c-4c7e-a71d-90bc3168d91f", 00:08:28.785 "is_configured": true, 00:08:28.785 "data_offset": 0, 00:08:28.785 "data_size": 65536 00:08:28.785 }, 00:08:28.785 { 00:08:28.785 "name": "BaseBdev2", 00:08:28.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.785 "is_configured": false, 00:08:28.785 "data_offset": 0, 00:08:28.785 "data_size": 0 00:08:28.785 } 00:08:28.785 ] 
00:08:28.785 }' 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.785 13:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.356 [2024-11-18 13:24:59.146655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.356 [2024-11-18 13:24:59.146708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:29.356 [2024-11-18 13:24:59.146717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:29.356 [2024-11-18 13:24:59.146979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:29.356 [2024-11-18 13:24:59.147177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:29.356 [2024-11-18 13:24:59.147193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:29.356 [2024-11-18 13:24:59.147449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.356 BaseBdev2 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.356 [ 00:08:29.356 { 00:08:29.356 "name": "BaseBdev2", 00:08:29.356 "aliases": [ 00:08:29.356 "e39f37b8-16bf-413f-a9b7-338a5635f228" 00:08:29.356 ], 00:08:29.356 "product_name": "Malloc disk", 00:08:29.356 "block_size": 512, 00:08:29.356 "num_blocks": 65536, 00:08:29.356 "uuid": "e39f37b8-16bf-413f-a9b7-338a5635f228", 00:08:29.356 "assigned_rate_limits": { 00:08:29.356 "rw_ios_per_sec": 0, 00:08:29.356 "rw_mbytes_per_sec": 0, 00:08:29.356 "r_mbytes_per_sec": 0, 00:08:29.356 "w_mbytes_per_sec": 0 00:08:29.356 }, 00:08:29.356 "claimed": true, 00:08:29.356 "claim_type": "exclusive_write", 00:08:29.356 "zoned": false, 00:08:29.356 "supported_io_types": { 00:08:29.356 "read": true, 00:08:29.356 "write": true, 00:08:29.356 "unmap": true, 00:08:29.356 "flush": true, 00:08:29.356 "reset": true, 00:08:29.356 "nvme_admin": false, 00:08:29.356 "nvme_io": false, 00:08:29.356 "nvme_io_md": false, 00:08:29.356 "write_zeroes": 
true, 00:08:29.356 "zcopy": true, 00:08:29.356 "get_zone_info": false, 00:08:29.356 "zone_management": false, 00:08:29.356 "zone_append": false, 00:08:29.356 "compare": false, 00:08:29.356 "compare_and_write": false, 00:08:29.356 "abort": true, 00:08:29.356 "seek_hole": false, 00:08:29.356 "seek_data": false, 00:08:29.356 "copy": true, 00:08:29.356 "nvme_iov_md": false 00:08:29.356 }, 00:08:29.356 "memory_domains": [ 00:08:29.356 { 00:08:29.356 "dma_device_id": "system", 00:08:29.356 "dma_device_type": 1 00:08:29.356 }, 00:08:29.356 { 00:08:29.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.356 "dma_device_type": 2 00:08:29.356 } 00:08:29.356 ], 00:08:29.356 "driver_specific": {} 00:08:29.356 } 00:08:29.356 ] 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.356 13:24:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.356 "name": "Existed_Raid", 00:08:29.356 "uuid": "9341cd46-c2d1-4c9f-bcab-39a2f92fdc1d", 00:08:29.356 "strip_size_kb": 0, 00:08:29.356 "state": "online", 00:08:29.356 "raid_level": "raid1", 00:08:29.356 "superblock": false, 00:08:29.356 "num_base_bdevs": 2, 00:08:29.356 "num_base_bdevs_discovered": 2, 00:08:29.356 "num_base_bdevs_operational": 2, 00:08:29.356 "base_bdevs_list": [ 00:08:29.356 { 00:08:29.356 "name": "BaseBdev1", 00:08:29.356 "uuid": "213c432f-de5c-4c7e-a71d-90bc3168d91f", 00:08:29.356 "is_configured": true, 00:08:29.356 "data_offset": 0, 00:08:29.356 "data_size": 65536 00:08:29.356 }, 00:08:29.356 { 00:08:29.356 "name": "BaseBdev2", 00:08:29.356 "uuid": "e39f37b8-16bf-413f-a9b7-338a5635f228", 00:08:29.356 "is_configured": true, 00:08:29.356 "data_offset": 0, 00:08:29.356 "data_size": 65536 00:08:29.356 } 00:08:29.356 ] 00:08:29.356 }' 00:08:29.356 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.356 13:24:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.617 [2024-11-18 13:24:59.590542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.617 "name": "Existed_Raid", 00:08:29.617 "aliases": [ 00:08:29.617 "9341cd46-c2d1-4c9f-bcab-39a2f92fdc1d" 00:08:29.617 ], 00:08:29.617 "product_name": "Raid Volume", 00:08:29.617 "block_size": 512, 00:08:29.617 "num_blocks": 65536, 00:08:29.617 "uuid": "9341cd46-c2d1-4c9f-bcab-39a2f92fdc1d", 00:08:29.617 "assigned_rate_limits": { 00:08:29.617 "rw_ios_per_sec": 0, 00:08:29.617 "rw_mbytes_per_sec": 0, 00:08:29.617 "r_mbytes_per_sec": 0, 00:08:29.617 
"w_mbytes_per_sec": 0 00:08:29.617 }, 00:08:29.617 "claimed": false, 00:08:29.617 "zoned": false, 00:08:29.617 "supported_io_types": { 00:08:29.617 "read": true, 00:08:29.617 "write": true, 00:08:29.617 "unmap": false, 00:08:29.617 "flush": false, 00:08:29.617 "reset": true, 00:08:29.617 "nvme_admin": false, 00:08:29.617 "nvme_io": false, 00:08:29.617 "nvme_io_md": false, 00:08:29.617 "write_zeroes": true, 00:08:29.617 "zcopy": false, 00:08:29.617 "get_zone_info": false, 00:08:29.617 "zone_management": false, 00:08:29.617 "zone_append": false, 00:08:29.617 "compare": false, 00:08:29.617 "compare_and_write": false, 00:08:29.617 "abort": false, 00:08:29.617 "seek_hole": false, 00:08:29.617 "seek_data": false, 00:08:29.617 "copy": false, 00:08:29.617 "nvme_iov_md": false 00:08:29.617 }, 00:08:29.617 "memory_domains": [ 00:08:29.617 { 00:08:29.617 "dma_device_id": "system", 00:08:29.617 "dma_device_type": 1 00:08:29.617 }, 00:08:29.617 { 00:08:29.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.617 "dma_device_type": 2 00:08:29.617 }, 00:08:29.617 { 00:08:29.617 "dma_device_id": "system", 00:08:29.617 "dma_device_type": 1 00:08:29.617 }, 00:08:29.617 { 00:08:29.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.617 "dma_device_type": 2 00:08:29.617 } 00:08:29.617 ], 00:08:29.617 "driver_specific": { 00:08:29.617 "raid": { 00:08:29.617 "uuid": "9341cd46-c2d1-4c9f-bcab-39a2f92fdc1d", 00:08:29.617 "strip_size_kb": 0, 00:08:29.617 "state": "online", 00:08:29.617 "raid_level": "raid1", 00:08:29.617 "superblock": false, 00:08:29.617 "num_base_bdevs": 2, 00:08:29.617 "num_base_bdevs_discovered": 2, 00:08:29.617 "num_base_bdevs_operational": 2, 00:08:29.617 "base_bdevs_list": [ 00:08:29.617 { 00:08:29.617 "name": "BaseBdev1", 00:08:29.617 "uuid": "213c432f-de5c-4c7e-a71d-90bc3168d91f", 00:08:29.617 "is_configured": true, 00:08:29.617 "data_offset": 0, 00:08:29.617 "data_size": 65536 00:08:29.617 }, 00:08:29.617 { 00:08:29.617 "name": "BaseBdev2", 00:08:29.617 "uuid": 
"e39f37b8-16bf-413f-a9b7-338a5635f228", 00:08:29.617 "is_configured": true, 00:08:29.617 "data_offset": 0, 00:08:29.617 "data_size": 65536 00:08:29.617 } 00:08:29.617 ] 00:08:29.617 } 00:08:29.617 } 00:08:29.617 }' 00:08:29.617 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:29.877 BaseBdev2' 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.877 13:24:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.877 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.877 [2024-11-18 13:24:59.834311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.138 "name": "Existed_Raid", 00:08:30.138 "uuid": "9341cd46-c2d1-4c9f-bcab-39a2f92fdc1d", 00:08:30.138 "strip_size_kb": 0, 00:08:30.138 "state": "online", 00:08:30.138 "raid_level": "raid1", 00:08:30.138 "superblock": false, 00:08:30.138 "num_base_bdevs": 2, 00:08:30.138 "num_base_bdevs_discovered": 1, 00:08:30.138 "num_base_bdevs_operational": 1, 00:08:30.138 "base_bdevs_list": [ 00:08:30.138 { 
00:08:30.138 "name": null, 00:08:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.138 "is_configured": false, 00:08:30.138 "data_offset": 0, 00:08:30.138 "data_size": 65536 00:08:30.138 }, 00:08:30.138 { 00:08:30.138 "name": "BaseBdev2", 00:08:30.138 "uuid": "e39f37b8-16bf-413f-a9b7-338a5635f228", 00:08:30.138 "is_configured": true, 00:08:30.138 "data_offset": 0, 00:08:30.138 "data_size": 65536 00:08:30.138 } 00:08:30.138 ] 00:08:30.138 }' 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.138 13:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.403 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:30.403 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.403 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.403 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.403 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.403 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:30.403 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.670 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:30.670 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.670 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:30.670 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.670 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:30.670 [2024-11-18 13:25:00.458345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:30.671 [2024-11-18 13:25:00.458449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.671 [2024-11-18 13:25:00.556829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.671 [2024-11-18 13:25:00.556881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.671 [2024-11-18 13:25:00.556909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62724 00:08:30.671 13:25:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62724 ']' 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62724 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62724 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.671 killing process with pid 62724 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62724' 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62724 00:08:30.671 [2024-11-18 13:25:00.654176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.671 13:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62724 00:08:30.671 [2024-11-18 13:25:00.671164] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:32.052 00:08:32.052 real 0m5.104s 00:08:32.052 user 0m7.400s 00:08:32.052 sys 0m0.857s 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.052 ************************************ 00:08:32.052 END TEST raid_state_function_test 00:08:32.052 ************************************ 00:08:32.052 13:25:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:32.052 13:25:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:32.052 13:25:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.052 13:25:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.052 ************************************ 00:08:32.052 START TEST raid_state_function_test_sb 00:08:32.052 ************************************ 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.052 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62977 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.053 Process raid pid: 62977 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62977' 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62977 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62977 ']' 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.053 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.053 13:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.053 [2024-11-18 13:25:01.959996] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:32.053 [2024-11-18 13:25:01.960135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.313 [2024-11-18 13:25:02.140794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.313 [2024-11-18 13:25:02.253024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.572 [2024-11-18 13:25:02.459694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.572 [2024-11-18 13:25:02.459743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.832 [2024-11-18 13:25:02.809499] 
bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.832 [2024-11-18 13:25:02.809554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.832 [2024-11-18 13:25:02.809565] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.832 [2024-11-18 13:25:02.809575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.832 "name": "Existed_Raid", 00:08:32.832 "uuid": "ce93454f-14dc-4263-b6a0-ed8c05cf8820", 00:08:32.832 "strip_size_kb": 0, 00:08:32.832 "state": "configuring", 00:08:32.832 "raid_level": "raid1", 00:08:32.832 "superblock": true, 00:08:32.832 "num_base_bdevs": 2, 00:08:32.832 "num_base_bdevs_discovered": 0, 00:08:32.832 "num_base_bdevs_operational": 2, 00:08:32.832 "base_bdevs_list": [ 00:08:32.832 { 00:08:32.832 "name": "BaseBdev1", 00:08:32.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.832 "is_configured": false, 00:08:32.832 "data_offset": 0, 00:08:32.832 "data_size": 0 00:08:32.832 }, 00:08:32.832 { 00:08:32.832 "name": "BaseBdev2", 00:08:32.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.832 "is_configured": false, 00:08:32.832 "data_offset": 0, 00:08:32.832 "data_size": 0 00:08:32.832 } 00:08:32.832 ] 00:08:32.832 }' 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.832 13:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.402 [2024-11-18 13:25:03.280677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:33.402 [2024-11-18 13:25:03.280728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.402 [2024-11-18 13:25:03.292618] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.402 [2024-11-18 13:25:03.292658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.402 [2024-11-18 13:25:03.292668] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.402 [2024-11-18 13:25:03.292680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.402 [2024-11-18 13:25:03.340044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.402 BaseBdev1 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.402 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.402 [ 00:08:33.402 { 00:08:33.402 "name": "BaseBdev1", 00:08:33.402 "aliases": [ 00:08:33.402 "e8a8de16-8541-43ed-bd30-b4e6fbb5dd22" 00:08:33.402 ], 00:08:33.402 "product_name": "Malloc disk", 00:08:33.402 "block_size": 512, 00:08:33.402 "num_blocks": 65536, 00:08:33.402 "uuid": "e8a8de16-8541-43ed-bd30-b4e6fbb5dd22", 00:08:33.402 "assigned_rate_limits": { 00:08:33.402 "rw_ios_per_sec": 0, 00:08:33.402 "rw_mbytes_per_sec": 0, 00:08:33.402 "r_mbytes_per_sec": 0, 00:08:33.402 "w_mbytes_per_sec": 0 00:08:33.402 }, 00:08:33.402 "claimed": true, 
00:08:33.402 "claim_type": "exclusive_write", 00:08:33.402 "zoned": false, 00:08:33.402 "supported_io_types": { 00:08:33.402 "read": true, 00:08:33.402 "write": true, 00:08:33.402 "unmap": true, 00:08:33.402 "flush": true, 00:08:33.402 "reset": true, 00:08:33.402 "nvme_admin": false, 00:08:33.402 "nvme_io": false, 00:08:33.402 "nvme_io_md": false, 00:08:33.402 "write_zeroes": true, 00:08:33.402 "zcopy": true, 00:08:33.402 "get_zone_info": false, 00:08:33.402 "zone_management": false, 00:08:33.402 "zone_append": false, 00:08:33.402 "compare": false, 00:08:33.402 "compare_and_write": false, 00:08:33.402 "abort": true, 00:08:33.402 "seek_hole": false, 00:08:33.402 "seek_data": false, 00:08:33.402 "copy": true, 00:08:33.403 "nvme_iov_md": false 00:08:33.403 }, 00:08:33.403 "memory_domains": [ 00:08:33.403 { 00:08:33.403 "dma_device_id": "system", 00:08:33.403 "dma_device_type": 1 00:08:33.403 }, 00:08:33.403 { 00:08:33.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.403 "dma_device_type": 2 00:08:33.403 } 00:08:33.403 ], 00:08:33.403 "driver_specific": {} 00:08:33.403 } 00:08:33.403 ] 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.403 "name": "Existed_Raid", 00:08:33.403 "uuid": "0bee90e5-c5f6-4092-82c1-439daffda267", 00:08:33.403 "strip_size_kb": 0, 00:08:33.403 "state": "configuring", 00:08:33.403 "raid_level": "raid1", 00:08:33.403 "superblock": true, 00:08:33.403 "num_base_bdevs": 2, 00:08:33.403 "num_base_bdevs_discovered": 1, 00:08:33.403 "num_base_bdevs_operational": 2, 00:08:33.403 "base_bdevs_list": [ 00:08:33.403 { 00:08:33.403 "name": "BaseBdev1", 00:08:33.403 "uuid": "e8a8de16-8541-43ed-bd30-b4e6fbb5dd22", 00:08:33.403 "is_configured": true, 00:08:33.403 "data_offset": 2048, 00:08:33.403 "data_size": 63488 00:08:33.403 }, 00:08:33.403 { 00:08:33.403 "name": "BaseBdev2", 00:08:33.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.403 "is_configured": false, 00:08:33.403 
"data_offset": 0, 00:08:33.403 "data_size": 0 00:08:33.403 } 00:08:33.403 ] 00:08:33.403 }' 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.403 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.973 [2024-11-18 13:25:03.803304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.973 [2024-11-18 13:25:03.803366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.973 [2024-11-18 13:25:03.815333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.973 [2024-11-18 13:25:03.817170] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.973 [2024-11-18 13:25:03.817214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.973 "name": "Existed_Raid", 00:08:33.973 "uuid": "8453d645-1ebb-4fa3-921b-eef723711f99", 00:08:33.973 "strip_size_kb": 0, 00:08:33.973 "state": "configuring", 00:08:33.973 "raid_level": "raid1", 00:08:33.973 "superblock": true, 00:08:33.973 "num_base_bdevs": 2, 00:08:33.973 "num_base_bdevs_discovered": 1, 00:08:33.973 "num_base_bdevs_operational": 2, 00:08:33.973 "base_bdevs_list": [ 00:08:33.973 { 00:08:33.973 "name": "BaseBdev1", 00:08:33.973 "uuid": "e8a8de16-8541-43ed-bd30-b4e6fbb5dd22", 00:08:33.973 "is_configured": true, 00:08:33.973 "data_offset": 2048, 00:08:33.973 "data_size": 63488 00:08:33.973 }, 00:08:33.973 { 00:08:33.973 "name": "BaseBdev2", 00:08:33.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.973 "is_configured": false, 00:08:33.973 "data_offset": 0, 00:08:33.973 "data_size": 0 00:08:33.973 } 00:08:33.973 ] 00:08:33.973 }' 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.973 13:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.542 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.542 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.543 [2024-11-18 13:25:04.356459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.543 [2024-11-18 13:25:04.356720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:34.543 [2024-11-18 13:25:04.356736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:34.543 [2024-11-18 13:25:04.356983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:34.543 
[2024-11-18 13:25:04.357160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:34.543 [2024-11-18 13:25:04.357177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:34.543 [2024-11-18 13:25:04.357317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.543 BaseBdev2 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.543 [ 00:08:34.543 { 00:08:34.543 "name": "BaseBdev2", 00:08:34.543 "aliases": [ 00:08:34.543 "e6a66082-e0e6-4f14-a3a6-23c6f5feaad1" 00:08:34.543 ], 00:08:34.543 "product_name": "Malloc disk", 00:08:34.543 "block_size": 512, 00:08:34.543 "num_blocks": 65536, 00:08:34.543 "uuid": "e6a66082-e0e6-4f14-a3a6-23c6f5feaad1", 00:08:34.543 "assigned_rate_limits": { 00:08:34.543 "rw_ios_per_sec": 0, 00:08:34.543 "rw_mbytes_per_sec": 0, 00:08:34.543 "r_mbytes_per_sec": 0, 00:08:34.543 "w_mbytes_per_sec": 0 00:08:34.543 }, 00:08:34.543 "claimed": true, 00:08:34.543 "claim_type": "exclusive_write", 00:08:34.543 "zoned": false, 00:08:34.543 "supported_io_types": { 00:08:34.543 "read": true, 00:08:34.543 "write": true, 00:08:34.543 "unmap": true, 00:08:34.543 "flush": true, 00:08:34.543 "reset": true, 00:08:34.543 "nvme_admin": false, 00:08:34.543 "nvme_io": false, 00:08:34.543 "nvme_io_md": false, 00:08:34.543 "write_zeroes": true, 00:08:34.543 "zcopy": true, 00:08:34.543 "get_zone_info": false, 00:08:34.543 "zone_management": false, 00:08:34.543 "zone_append": false, 00:08:34.543 "compare": false, 00:08:34.543 "compare_and_write": false, 00:08:34.543 "abort": true, 00:08:34.543 "seek_hole": false, 00:08:34.543 "seek_data": false, 00:08:34.543 "copy": true, 00:08:34.543 "nvme_iov_md": false 00:08:34.543 }, 00:08:34.543 "memory_domains": [ 00:08:34.543 { 00:08:34.543 "dma_device_id": "system", 00:08:34.543 "dma_device_type": 1 00:08:34.543 }, 00:08:34.543 { 00:08:34.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.543 "dma_device_type": 2 00:08:34.543 } 00:08:34.543 ], 00:08:34.543 "driver_specific": {} 00:08:34.543 } 00:08:34.543 ] 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:34.543 "name": "Existed_Raid", 00:08:34.543 "uuid": "8453d645-1ebb-4fa3-921b-eef723711f99", 00:08:34.543 "strip_size_kb": 0, 00:08:34.543 "state": "online", 00:08:34.543 "raid_level": "raid1", 00:08:34.543 "superblock": true, 00:08:34.543 "num_base_bdevs": 2, 00:08:34.543 "num_base_bdevs_discovered": 2, 00:08:34.543 "num_base_bdevs_operational": 2, 00:08:34.543 "base_bdevs_list": [ 00:08:34.543 { 00:08:34.543 "name": "BaseBdev1", 00:08:34.543 "uuid": "e8a8de16-8541-43ed-bd30-b4e6fbb5dd22", 00:08:34.543 "is_configured": true, 00:08:34.543 "data_offset": 2048, 00:08:34.543 "data_size": 63488 00:08:34.543 }, 00:08:34.543 { 00:08:34.543 "name": "BaseBdev2", 00:08:34.543 "uuid": "e6a66082-e0e6-4f14-a3a6-23c6f5feaad1", 00:08:34.543 "is_configured": true, 00:08:34.543 "data_offset": 2048, 00:08:34.543 "data_size": 63488 00:08:34.543 } 00:08:34.543 ] 00:08:34.543 }' 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.543 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.803 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.803 [2024-11-18 13:25:04.848018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.063 "name": "Existed_Raid", 00:08:35.063 "aliases": [ 00:08:35.063 "8453d645-1ebb-4fa3-921b-eef723711f99" 00:08:35.063 ], 00:08:35.063 "product_name": "Raid Volume", 00:08:35.063 "block_size": 512, 00:08:35.063 "num_blocks": 63488, 00:08:35.063 "uuid": "8453d645-1ebb-4fa3-921b-eef723711f99", 00:08:35.063 "assigned_rate_limits": { 00:08:35.063 "rw_ios_per_sec": 0, 00:08:35.063 "rw_mbytes_per_sec": 0, 00:08:35.063 "r_mbytes_per_sec": 0, 00:08:35.063 "w_mbytes_per_sec": 0 00:08:35.063 }, 00:08:35.063 "claimed": false, 00:08:35.063 "zoned": false, 00:08:35.063 "supported_io_types": { 00:08:35.063 "read": true, 00:08:35.063 "write": true, 00:08:35.063 "unmap": false, 00:08:35.063 "flush": false, 00:08:35.063 "reset": true, 00:08:35.063 "nvme_admin": false, 00:08:35.063 "nvme_io": false, 00:08:35.063 "nvme_io_md": false, 00:08:35.063 "write_zeroes": true, 00:08:35.063 "zcopy": false, 00:08:35.063 "get_zone_info": false, 00:08:35.063 "zone_management": false, 00:08:35.063 "zone_append": false, 00:08:35.063 "compare": false, 00:08:35.063 "compare_and_write": false, 00:08:35.063 "abort": false, 00:08:35.063 "seek_hole": false, 00:08:35.063 "seek_data": false, 00:08:35.063 "copy": false, 00:08:35.063 "nvme_iov_md": false 00:08:35.063 }, 00:08:35.063 "memory_domains": [ 00:08:35.063 { 00:08:35.063 "dma_device_id": "system", 00:08:35.063 "dma_device_type": 1 00:08:35.063 }, 
00:08:35.063 { 00:08:35.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.063 "dma_device_type": 2 00:08:35.063 }, 00:08:35.063 { 00:08:35.063 "dma_device_id": "system", 00:08:35.063 "dma_device_type": 1 00:08:35.063 }, 00:08:35.063 { 00:08:35.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.063 "dma_device_type": 2 00:08:35.063 } 00:08:35.063 ], 00:08:35.063 "driver_specific": { 00:08:35.063 "raid": { 00:08:35.063 "uuid": "8453d645-1ebb-4fa3-921b-eef723711f99", 00:08:35.063 "strip_size_kb": 0, 00:08:35.063 "state": "online", 00:08:35.063 "raid_level": "raid1", 00:08:35.063 "superblock": true, 00:08:35.063 "num_base_bdevs": 2, 00:08:35.063 "num_base_bdevs_discovered": 2, 00:08:35.063 "num_base_bdevs_operational": 2, 00:08:35.063 "base_bdevs_list": [ 00:08:35.063 { 00:08:35.063 "name": "BaseBdev1", 00:08:35.063 "uuid": "e8a8de16-8541-43ed-bd30-b4e6fbb5dd22", 00:08:35.063 "is_configured": true, 00:08:35.063 "data_offset": 2048, 00:08:35.063 "data_size": 63488 00:08:35.063 }, 00:08:35.063 { 00:08:35.063 "name": "BaseBdev2", 00:08:35.063 "uuid": "e6a66082-e0e6-4f14-a3a6-23c6f5feaad1", 00:08:35.063 "is_configured": true, 00:08:35.063 "data_offset": 2048, 00:08:35.063 "data_size": 63488 00:08:35.063 } 00:08:35.063 ] 00:08:35.063 } 00:08:35.063 } 00:08:35.063 }' 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.063 BaseBdev2' 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.063 13:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.063 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.063 [2024-11-18 13:25:05.063400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.322 
13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.322 "name": "Existed_Raid", 00:08:35.322 "uuid": "8453d645-1ebb-4fa3-921b-eef723711f99", 00:08:35.322 "strip_size_kb": 0, 00:08:35.322 "state": "online", 00:08:35.322 "raid_level": "raid1", 00:08:35.322 "superblock": true, 00:08:35.322 "num_base_bdevs": 2, 00:08:35.322 "num_base_bdevs_discovered": 1, 00:08:35.322 "num_base_bdevs_operational": 1, 00:08:35.322 "base_bdevs_list": [ 00:08:35.322 { 00:08:35.322 "name": null, 00:08:35.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.322 "is_configured": false, 00:08:35.322 "data_offset": 0, 00:08:35.322 "data_size": 63488 00:08:35.322 }, 00:08:35.322 { 00:08:35.322 "name": "BaseBdev2", 00:08:35.322 "uuid": "e6a66082-e0e6-4f14-a3a6-23c6f5feaad1", 00:08:35.322 "is_configured": true, 00:08:35.322 "data_offset": 2048, 00:08:35.322 "data_size": 63488 00:08:35.322 } 00:08:35.322 ] 00:08:35.322 }' 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.322 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:35.650 13:25:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.650 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.650 [2024-11-18 13:25:05.669722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.650 [2024-11-18 13:25:05.669875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.929 [2024-11-18 13:25:05.766455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.929 [2024-11-18 13:25:05.766513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.929 [2024-11-18 13:25:05.766526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62977 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62977 ']' 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62977 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62977 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.929 13:25:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62977' 00:08:35.929 killing process with pid 62977 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62977 00:08:35.929 [2024-11-18 13:25:05.863797] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.929 13:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62977 00:08:35.929 [2024-11-18 13:25:05.880239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.310 13:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:37.310 00:08:37.310 real 0m5.145s 00:08:37.310 user 0m7.387s 00:08:37.310 sys 0m0.911s 00:08:37.310 13:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.310 13:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.310 ************************************ 00:08:37.310 END TEST raid_state_function_test_sb 00:08:37.310 ************************************ 00:08:37.310 13:25:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:37.310 13:25:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:37.310 13:25:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.310 13:25:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.310 ************************************ 00:08:37.310 START TEST raid_superblock_test 00:08:37.310 ************************************ 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:37.310 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63229 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63229 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63229 ']' 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.311 13:25:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.311 [2024-11-18 13:25:07.173854] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:37.311 [2024-11-18 13:25:07.174019] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63229 ] 00:08:37.311 [2024-11-18 13:25:07.349481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.570 [2024-11-18 13:25:07.462763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.829 [2024-11-18 13:25:07.664161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.829 [2024-11-18 13:25:07.664205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:38.089 13:25:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.089 malloc1 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.089 [2024-11-18 13:25:08.113657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:38.089 [2024-11-18 13:25:08.113718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.089 [2024-11-18 13:25:08.113740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:38.089 [2024-11-18 13:25:08.113751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.089 
[2024-11-18 13:25:08.115802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.089 [2024-11-18 13:25:08.115839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:38.089 pt1 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.089 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.349 malloc2 00:08:38.349 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.349 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:38.349 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.349 13:25:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.349 [2024-11-18 13:25:08.168070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:38.349 [2024-11-18 13:25:08.168120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.350 [2024-11-18 13:25:08.168156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:38.350 [2024-11-18 13:25:08.168165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.350 [2024-11-18 13:25:08.170169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.350 [2024-11-18 13:25:08.170218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:38.350 pt2 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.350 [2024-11-18 13:25:08.180108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:38.350 [2024-11-18 13:25:08.181787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:38.350 [2024-11-18 13:25:08.181948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:38.350 [2024-11-18 13:25:08.181972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:38.350 [2024-11-18 
13:25:08.182227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:38.350 [2024-11-18 13:25:08.182384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:38.350 [2024-11-18 13:25:08.182404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:38.350 [2024-11-18 13:25:08.182529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.350 13:25:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.350 "name": "raid_bdev1", 00:08:38.350 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:38.350 "strip_size_kb": 0, 00:08:38.350 "state": "online", 00:08:38.350 "raid_level": "raid1", 00:08:38.350 "superblock": true, 00:08:38.350 "num_base_bdevs": 2, 00:08:38.350 "num_base_bdevs_discovered": 2, 00:08:38.350 "num_base_bdevs_operational": 2, 00:08:38.350 "base_bdevs_list": [ 00:08:38.350 { 00:08:38.350 "name": "pt1", 00:08:38.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.350 "is_configured": true, 00:08:38.350 "data_offset": 2048, 00:08:38.350 "data_size": 63488 00:08:38.350 }, 00:08:38.350 { 00:08:38.350 "name": "pt2", 00:08:38.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.350 "is_configured": true, 00:08:38.350 "data_offset": 2048, 00:08:38.350 "data_size": 63488 00:08:38.350 } 00:08:38.350 ] 00:08:38.350 }' 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.350 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.609 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:38.609 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.869 
13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 [2024-11-18 13:25:08.671580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.869 "name": "raid_bdev1", 00:08:38.869 "aliases": [ 00:08:38.869 "704cbab4-3db4-4382-b28e-1fa44c2f8a9f" 00:08:38.869 ], 00:08:38.869 "product_name": "Raid Volume", 00:08:38.869 "block_size": 512, 00:08:38.869 "num_blocks": 63488, 00:08:38.869 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:38.869 "assigned_rate_limits": { 00:08:38.869 "rw_ios_per_sec": 0, 00:08:38.869 "rw_mbytes_per_sec": 0, 00:08:38.869 "r_mbytes_per_sec": 0, 00:08:38.869 "w_mbytes_per_sec": 0 00:08:38.869 }, 00:08:38.869 "claimed": false, 00:08:38.869 "zoned": false, 00:08:38.869 "supported_io_types": { 00:08:38.869 "read": true, 00:08:38.869 "write": true, 00:08:38.869 "unmap": false, 00:08:38.869 "flush": false, 00:08:38.869 "reset": true, 00:08:38.869 "nvme_admin": false, 00:08:38.869 "nvme_io": false, 00:08:38.869 "nvme_io_md": false, 00:08:38.869 "write_zeroes": true, 00:08:38.869 "zcopy": false, 00:08:38.869 "get_zone_info": false, 00:08:38.869 "zone_management": false, 00:08:38.869 "zone_append": false, 00:08:38.869 "compare": false, 00:08:38.869 "compare_and_write": false, 00:08:38.869 "abort": false, 00:08:38.869 "seek_hole": false, 
00:08:38.869 "seek_data": false, 00:08:38.869 "copy": false, 00:08:38.869 "nvme_iov_md": false 00:08:38.869 }, 00:08:38.869 "memory_domains": [ 00:08:38.869 { 00:08:38.869 "dma_device_id": "system", 00:08:38.869 "dma_device_type": 1 00:08:38.869 }, 00:08:38.869 { 00:08:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.869 "dma_device_type": 2 00:08:38.869 }, 00:08:38.869 { 00:08:38.869 "dma_device_id": "system", 00:08:38.869 "dma_device_type": 1 00:08:38.869 }, 00:08:38.869 { 00:08:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.869 "dma_device_type": 2 00:08:38.869 } 00:08:38.869 ], 00:08:38.869 "driver_specific": { 00:08:38.869 "raid": { 00:08:38.869 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:38.869 "strip_size_kb": 0, 00:08:38.869 "state": "online", 00:08:38.869 "raid_level": "raid1", 00:08:38.869 "superblock": true, 00:08:38.869 "num_base_bdevs": 2, 00:08:38.869 "num_base_bdevs_discovered": 2, 00:08:38.869 "num_base_bdevs_operational": 2, 00:08:38.869 "base_bdevs_list": [ 00:08:38.869 { 00:08:38.869 "name": "pt1", 00:08:38.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.869 "is_configured": true, 00:08:38.869 "data_offset": 2048, 00:08:38.869 "data_size": 63488 00:08:38.869 }, 00:08:38.869 { 00:08:38.869 "name": "pt2", 00:08:38.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.869 "is_configured": true, 00:08:38.869 "data_offset": 2048, 00:08:38.869 "data_size": 63488 00:08:38.869 } 00:08:38.869 ] 00:08:38.869 } 00:08:38.869 } 00:08:38.869 }' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:38.869 pt2' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.869 13:25:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:38.869 [2024-11-18 13:25:08.879194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.869 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=704cbab4-3db4-4382-b28e-1fa44c2f8a9f 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 704cbab4-3db4-4382-b28e-1fa44c2f8a9f ']' 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.129 [2024-11-18 13:25:08.930779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.129 [2024-11-18 13:25:08.930815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.129 [2024-11-18 13:25:08.930908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.129 [2024-11-18 13:25:08.930993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.129 [2024-11-18 13:25:08.931018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.129 13:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.129 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.129 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:39.129 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.129 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:39.129 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:39.129 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.129 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:39.129 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.130 [2024-11-18 13:25:09.066585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:39.130 [2024-11-18 13:25:09.068604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:39.130 [2024-11-18 13:25:09.068674] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:08:39.130 [2024-11-18 13:25:09.068728] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:39.130 [2024-11-18 13:25:09.068744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.130 [2024-11-18 13:25:09.068755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:39.130 request: 00:08:39.130 { 00:08:39.130 "name": "raid_bdev1", 00:08:39.130 "raid_level": "raid1", 00:08:39.130 "base_bdevs": [ 00:08:39.130 "malloc1", 00:08:39.130 "malloc2" 00:08:39.130 ], 00:08:39.130 "superblock": false, 00:08:39.130 "method": "bdev_raid_create", 00:08:39.130 "req_id": 1 00:08:39.130 } 00:08:39.130 Got JSON-RPC error response 00:08:39.130 response: 00:08:39.130 { 00:08:39.130 "code": -17, 00:08:39.130 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:39.130 } 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.130 [2024-11-18 13:25:09.134439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:39.130 [2024-11-18 13:25:09.134503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.130 [2024-11-18 13:25:09.134522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:39.130 [2024-11-18 13:25:09.134535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.130 [2024-11-18 13:25:09.136862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.130 [2024-11-18 13:25:09.136908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:39.130 [2024-11-18 13:25:09.136999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:39.130 [2024-11-18 13:25:09.137071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:39.130 pt1 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.130 13:25:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.130 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.388 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.388 "name": "raid_bdev1", 00:08:39.388 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:39.388 "strip_size_kb": 0, 00:08:39.388 "state": "configuring", 00:08:39.388 "raid_level": "raid1", 00:08:39.388 "superblock": true, 00:08:39.388 "num_base_bdevs": 2, 00:08:39.388 "num_base_bdevs_discovered": 1, 00:08:39.388 "num_base_bdevs_operational": 2, 00:08:39.388 "base_bdevs_list": [ 00:08:39.388 { 00:08:39.388 "name": "pt1", 00:08:39.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.388 
"is_configured": true, 00:08:39.388 "data_offset": 2048, 00:08:39.388 "data_size": 63488 00:08:39.388 }, 00:08:39.388 { 00:08:39.388 "name": null, 00:08:39.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.388 "is_configured": false, 00:08:39.388 "data_offset": 2048, 00:08:39.388 "data_size": 63488 00:08:39.388 } 00:08:39.388 ] 00:08:39.388 }' 00:08:39.388 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.388 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.647 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:39.647 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:39.647 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:39.647 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:39.647 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.647 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.647 [2024-11-18 13:25:09.621786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:39.647 [2024-11-18 13:25:09.621874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.648 [2024-11-18 13:25:09.621896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:39.648 [2024-11-18 13:25:09.621907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.648 [2024-11-18 13:25:09.622384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.648 [2024-11-18 13:25:09.622405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:39.648 [2024-11-18 13:25:09.622488] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:39.648 [2024-11-18 13:25:09.622512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:39.648 [2024-11-18 13:25:09.622625] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:39.648 [2024-11-18 13:25:09.622636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:39.648 [2024-11-18 13:25:09.622860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:39.648 [2024-11-18 13:25:09.623011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:39.648 [2024-11-18 13:25:09.623027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:39.648 [2024-11-18 13:25:09.623171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.648 pt2 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.648 
13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.648 "name": "raid_bdev1", 00:08:39.648 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:39.648 "strip_size_kb": 0, 00:08:39.648 "state": "online", 00:08:39.648 "raid_level": "raid1", 00:08:39.648 "superblock": true, 00:08:39.648 "num_base_bdevs": 2, 00:08:39.648 "num_base_bdevs_discovered": 2, 00:08:39.648 "num_base_bdevs_operational": 2, 00:08:39.648 "base_bdevs_list": [ 00:08:39.648 { 00:08:39.648 "name": "pt1", 00:08:39.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:39.648 "is_configured": true, 00:08:39.648 "data_offset": 2048, 00:08:39.648 "data_size": 63488 00:08:39.648 }, 00:08:39.648 { 00:08:39.648 "name": "pt2", 00:08:39.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.648 "is_configured": true, 00:08:39.648 "data_offset": 2048, 00:08:39.648 "data_size": 63488 00:08:39.648 } 00:08:39.648 ] 00:08:39.648 }' 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:39.648 13:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.217 [2024-11-18 13:25:10.105316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.217 "name": "raid_bdev1", 00:08:40.217 "aliases": [ 00:08:40.217 "704cbab4-3db4-4382-b28e-1fa44c2f8a9f" 00:08:40.217 ], 00:08:40.217 "product_name": "Raid Volume", 00:08:40.217 "block_size": 512, 00:08:40.217 "num_blocks": 63488, 00:08:40.217 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:40.217 "assigned_rate_limits": { 00:08:40.217 "rw_ios_per_sec": 0, 00:08:40.217 "rw_mbytes_per_sec": 0, 00:08:40.217 "r_mbytes_per_sec": 0, 00:08:40.217 "w_mbytes_per_sec": 0 
00:08:40.217 }, 00:08:40.217 "claimed": false, 00:08:40.217 "zoned": false, 00:08:40.217 "supported_io_types": { 00:08:40.217 "read": true, 00:08:40.217 "write": true, 00:08:40.217 "unmap": false, 00:08:40.217 "flush": false, 00:08:40.217 "reset": true, 00:08:40.217 "nvme_admin": false, 00:08:40.217 "nvme_io": false, 00:08:40.217 "nvme_io_md": false, 00:08:40.217 "write_zeroes": true, 00:08:40.217 "zcopy": false, 00:08:40.217 "get_zone_info": false, 00:08:40.217 "zone_management": false, 00:08:40.217 "zone_append": false, 00:08:40.217 "compare": false, 00:08:40.217 "compare_and_write": false, 00:08:40.217 "abort": false, 00:08:40.217 "seek_hole": false, 00:08:40.217 "seek_data": false, 00:08:40.217 "copy": false, 00:08:40.217 "nvme_iov_md": false 00:08:40.217 }, 00:08:40.217 "memory_domains": [ 00:08:40.217 { 00:08:40.217 "dma_device_id": "system", 00:08:40.217 "dma_device_type": 1 00:08:40.217 }, 00:08:40.217 { 00:08:40.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.217 "dma_device_type": 2 00:08:40.217 }, 00:08:40.217 { 00:08:40.217 "dma_device_id": "system", 00:08:40.217 "dma_device_type": 1 00:08:40.217 }, 00:08:40.217 { 00:08:40.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.217 "dma_device_type": 2 00:08:40.217 } 00:08:40.217 ], 00:08:40.217 "driver_specific": { 00:08:40.217 "raid": { 00:08:40.217 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:40.217 "strip_size_kb": 0, 00:08:40.217 "state": "online", 00:08:40.217 "raid_level": "raid1", 00:08:40.217 "superblock": true, 00:08:40.217 "num_base_bdevs": 2, 00:08:40.217 "num_base_bdevs_discovered": 2, 00:08:40.217 "num_base_bdevs_operational": 2, 00:08:40.217 "base_bdevs_list": [ 00:08:40.217 { 00:08:40.217 "name": "pt1", 00:08:40.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.217 "is_configured": true, 00:08:40.217 "data_offset": 2048, 00:08:40.217 "data_size": 63488 00:08:40.217 }, 00:08:40.217 { 00:08:40.217 "name": "pt2", 00:08:40.217 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:40.217 "is_configured": true, 00:08:40.217 "data_offset": 2048, 00:08:40.217 "data_size": 63488 00:08:40.217 } 00:08:40.217 ] 00:08:40.217 } 00:08:40.217 } 00:08:40.217 }' 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:40.217 pt2' 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.217 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.477 [2024-11-18 13:25:10.352806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 704cbab4-3db4-4382-b28e-1fa44c2f8a9f '!=' 704cbab4-3db4-4382-b28e-1fa44c2f8a9f ']' 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.477 [2024-11-18 13:25:10.400538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:40.477 "name": "raid_bdev1", 00:08:40.477 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:40.477 "strip_size_kb": 0, 00:08:40.477 "state": "online", 00:08:40.477 "raid_level": "raid1", 00:08:40.477 "superblock": true, 00:08:40.477 "num_base_bdevs": 2, 00:08:40.477 "num_base_bdevs_discovered": 1, 00:08:40.477 "num_base_bdevs_operational": 1, 00:08:40.477 "base_bdevs_list": [ 00:08:40.477 { 00:08:40.477 "name": null, 00:08:40.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.477 "is_configured": false, 00:08:40.477 "data_offset": 0, 00:08:40.477 "data_size": 63488 00:08:40.477 }, 00:08:40.477 { 00:08:40.477 "name": "pt2", 00:08:40.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.477 "is_configured": true, 00:08:40.477 "data_offset": 2048, 00:08:40.477 "data_size": 63488 00:08:40.477 } 00:08:40.477 ] 00:08:40.477 }' 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.477 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.046 [2024-11-18 13:25:10.871751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.046 [2024-11-18 13:25:10.871793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.046 [2024-11-18 13:25:10.871883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.046 [2024-11-18 13:25:10.871939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.046 [2024-11-18 13:25:10.871952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.046 [2024-11-18 13:25:10.927629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.046 [2024-11-18 13:25:10.927701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.046 [2024-11-18 13:25:10.927722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:41.046 [2024-11-18 13:25:10.927734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.046 [2024-11-18 13:25:10.930073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.046 [2024-11-18 13:25:10.930114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.046 [2024-11-18 13:25:10.930236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:41.046 [2024-11-18 13:25:10.930294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.046 [2024-11-18 13:25:10.930407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:41.046 [2024-11-18 13:25:10.930427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.046 [2024-11-18 13:25:10.930672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:41.046 [2024-11-18 13:25:10.930862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:41.046 [2024-11-18 13:25:10.930878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:41.046 [2024-11-18 13:25:10.931042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.046 pt2 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:41.046 "name": "raid_bdev1", 00:08:41.046 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:41.046 "strip_size_kb": 0, 00:08:41.046 "state": "online", 00:08:41.046 "raid_level": "raid1", 00:08:41.046 "superblock": true, 00:08:41.046 "num_base_bdevs": 2, 00:08:41.046 "num_base_bdevs_discovered": 1, 00:08:41.046 "num_base_bdevs_operational": 1, 00:08:41.046 "base_bdevs_list": [ 00:08:41.046 { 00:08:41.046 "name": null, 00:08:41.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.046 "is_configured": false, 00:08:41.046 "data_offset": 2048, 00:08:41.046 "data_size": 63488 00:08:41.046 }, 00:08:41.046 { 00:08:41.046 "name": "pt2", 00:08:41.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.046 "is_configured": true, 00:08:41.046 "data_offset": 2048, 00:08:41.046 "data_size": 63488 00:08:41.046 } 00:08:41.046 ] 00:08:41.046 }' 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.046 13:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.307 [2024-11-18 13:25:11.307021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.307 [2024-11-18 13:25:11.307062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.307 [2024-11-18 13:25:11.307164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.307 [2024-11-18 13:25:11.307214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.307 [2024-11-18 13:25:11.307230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.307 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.565 [2024-11-18 13:25:11.366961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.565 [2024-11-18 13:25:11.367037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.565 [2024-11-18 13:25:11.367058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:41.565 [2024-11-18 13:25:11.367069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.565 [2024-11-18 13:25:11.369257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.565 [2024-11-18 13:25:11.369292] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.565 [2024-11-18 13:25:11.369383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:41.565 [2024-11-18 13:25:11.369428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.565 [2024-11-18 13:25:11.369568] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:41.565 [2024-11-18 13:25:11.369584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.565 [2024-11-18 13:25:11.369600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:41.565 [2024-11-18 13:25:11.369663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.565 [2024-11-18 13:25:11.369741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:41.565 [2024-11-18 13:25:11.369755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.565 [2024-11-18 13:25:11.369992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:41.565 [2024-11-18 13:25:11.370140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:41.565 [2024-11-18 13:25:11.370154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:41.565 [2024-11-18 13:25:11.370334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.565 pt1 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.565 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.566 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.566 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.566 "name": "raid_bdev1", 00:08:41.566 "uuid": "704cbab4-3db4-4382-b28e-1fa44c2f8a9f", 00:08:41.566 "strip_size_kb": 0, 00:08:41.566 "state": "online", 00:08:41.566 "raid_level": "raid1", 00:08:41.566 "superblock": true, 00:08:41.566 "num_base_bdevs": 2, 00:08:41.566 "num_base_bdevs_discovered": 1, 00:08:41.566 "num_base_bdevs_operational": 
1, 00:08:41.566 "base_bdevs_list": [ 00:08:41.566 { 00:08:41.566 "name": null, 00:08:41.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.566 "is_configured": false, 00:08:41.566 "data_offset": 2048, 00:08:41.566 "data_size": 63488 00:08:41.566 }, 00:08:41.566 { 00:08:41.566 "name": "pt2", 00:08:41.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.566 "is_configured": true, 00:08:41.566 "data_offset": 2048, 00:08:41.566 "data_size": 63488 00:08:41.566 } 00:08:41.566 ] 00:08:41.566 }' 00:08:41.566 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.566 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.824 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:42.084 [2024-11-18 13:25:11.878413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 704cbab4-3db4-4382-b28e-1fa44c2f8a9f '!=' 704cbab4-3db4-4382-b28e-1fa44c2f8a9f ']' 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63229 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63229 ']' 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63229 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63229 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.084 killing process with pid 63229 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63229' 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63229 00:08:42.084 [2024-11-18 13:25:11.951097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.084 [2024-11-18 13:25:11.951211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.084 [2024-11-18 13:25:11.951262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.084 [2024-11-18 13:25:11.951278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:42.084 13:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 
63229 00:08:42.343 [2024-11-18 13:25:12.166134] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.282 13:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:43.282 00:08:43.282 real 0m6.241s 00:08:43.282 user 0m9.422s 00:08:43.282 sys 0m1.092s 00:08:43.282 13:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.282 13:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.282 ************************************ 00:08:43.282 END TEST raid_superblock_test 00:08:43.282 ************************************ 00:08:43.541 13:25:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:43.541 13:25:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:43.541 13:25:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.541 13:25:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.541 ************************************ 00:08:43.541 START TEST raid_read_error_test 00:08:43.541 ************************************ 00:08:43.541 13:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:43.541 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:43.541 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q59j0GiPBm 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63564 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63564 00:08:43.542 
13:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63564 ']' 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.542 13:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.542 [2024-11-18 13:25:13.501863] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:43.542 [2024-11-18 13:25:13.502012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63564 ] 00:08:43.801 [2024-11-18 13:25:13.686007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.801 [2024-11-18 13:25:13.801220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.060 [2024-11-18 13:25:14.002860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.060 [2024-11-18 13:25:14.002928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.320 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.320 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.320 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:44.320 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:44.320 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.320 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 BaseBdev1_malloc 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 true 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 [2024-11-18 13:25:14.408103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:44.579 [2024-11-18 13:25:14.408167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.579 [2024-11-18 13:25:14.408187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:44.579 [2024-11-18 13:25:14.408198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.579 [2024-11-18 13:25:14.410269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.579 [2024-11-18 13:25:14.410310] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:44.579 BaseBdev1 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 BaseBdev2_malloc 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 true 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.579 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 [2024-11-18 13:25:14.473530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:44.579 [2024-11-18 13:25:14.473584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.580 [2024-11-18 13:25:14.473601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:44.580 [2024-11-18 13:25:14.473612] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.580 [2024-11-18 13:25:14.475683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.580 [2024-11-18 13:25:14.475723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:44.580 BaseBdev2 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.580 [2024-11-18 13:25:14.481569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.580 [2024-11-18 13:25:14.483395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.580 [2024-11-18 13:25:14.483600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:44.580 [2024-11-18 13:25:14.483616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.580 [2024-11-18 13:25:14.483864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:44.580 [2024-11-18 13:25:14.484050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:44.580 [2024-11-18 13:25:14.484068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:44.580 [2024-11-18 13:25:14.484240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.580 "name": "raid_bdev1", 00:08:44.580 "uuid": "6560feb6-54b2-4b92-bb4b-34d581c1ed0b", 00:08:44.580 "strip_size_kb": 0, 00:08:44.580 "state": "online", 00:08:44.580 "raid_level": "raid1", 00:08:44.580 "superblock": true, 00:08:44.580 "num_base_bdevs": 2, 00:08:44.580 
"num_base_bdevs_discovered": 2, 00:08:44.580 "num_base_bdevs_operational": 2, 00:08:44.580 "base_bdevs_list": [ 00:08:44.580 { 00:08:44.580 "name": "BaseBdev1", 00:08:44.580 "uuid": "a58d5719-055f-515b-ad5c-d7728aec27b0", 00:08:44.580 "is_configured": true, 00:08:44.580 "data_offset": 2048, 00:08:44.580 "data_size": 63488 00:08:44.580 }, 00:08:44.580 { 00:08:44.580 "name": "BaseBdev2", 00:08:44.580 "uuid": "aa1d3468-b7eb-5d67-957f-a5d231854cad", 00:08:44.580 "is_configured": true, 00:08:44.580 "data_offset": 2048, 00:08:44.580 "data_size": 63488 00:08:44.580 } 00:08:44.580 ] 00:08:44.580 }' 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.580 13:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.155 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:45.155 13:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:45.155 [2024-11-18 13:25:15.030071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:46.092 13:25:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.092 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.093 13:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.093 13:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.093 13:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.093 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.093 "name": "raid_bdev1", 00:08:46.093 "uuid": "6560feb6-54b2-4b92-bb4b-34d581c1ed0b", 00:08:46.093 "strip_size_kb": 0, 00:08:46.093 "state": "online", 
00:08:46.093 "raid_level": "raid1", 00:08:46.093 "superblock": true, 00:08:46.093 "num_base_bdevs": 2, 00:08:46.093 "num_base_bdevs_discovered": 2, 00:08:46.093 "num_base_bdevs_operational": 2, 00:08:46.093 "base_bdevs_list": [ 00:08:46.093 { 00:08:46.093 "name": "BaseBdev1", 00:08:46.093 "uuid": "a58d5719-055f-515b-ad5c-d7728aec27b0", 00:08:46.093 "is_configured": true, 00:08:46.093 "data_offset": 2048, 00:08:46.093 "data_size": 63488 00:08:46.093 }, 00:08:46.093 { 00:08:46.093 "name": "BaseBdev2", 00:08:46.093 "uuid": "aa1d3468-b7eb-5d67-957f-a5d231854cad", 00:08:46.093 "is_configured": true, 00:08:46.093 "data_offset": 2048, 00:08:46.093 "data_size": 63488 00:08:46.093 } 00:08:46.093 ] 00:08:46.093 }' 00:08:46.093 13:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.093 13:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.352 13:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.352 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.352 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.352 [2024-11-18 13:25:16.399709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.352 [2024-11-18 13:25:16.399756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.352 [2024-11-18 13:25:16.402455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.352 [2024-11-18 13:25:16.402504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.352 [2024-11-18 13:25:16.402587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.352 [2024-11-18 13:25:16.402600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:46.612 { 00:08:46.613 "results": [ 00:08:46.613 { 00:08:46.613 "job": "raid_bdev1", 00:08:46.613 "core_mask": "0x1", 00:08:46.613 "workload": "randrw", 00:08:46.613 "percentage": 50, 00:08:46.613 "status": "finished", 00:08:46.613 "queue_depth": 1, 00:08:46.613 "io_size": 131072, 00:08:46.613 "runtime": 1.37057, 00:08:46.613 "iops": 18063.287537302, 00:08:46.613 "mibps": 2257.91094216275, 00:08:46.613 "io_failed": 0, 00:08:46.613 "io_timeout": 0, 00:08:46.613 "avg_latency_us": 52.81414905722046, 00:08:46.613 "min_latency_us": 22.805240174672488, 00:08:46.613 "max_latency_us": 1366.5257641921398 00:08:46.613 } 00:08:46.613 ], 00:08:46.613 "core_count": 1 00:08:46.613 } 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63564 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63564 ']' 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63564 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63564 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.613 killing process with pid 63564 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63564' 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63564 00:08:46.613 [2024-11-18 
13:25:16.450074] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.613 13:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63564 00:08:46.613 [2024-11-18 13:25:16.585658] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.993 13:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q59j0GiPBm 00:08:47.993 13:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:47.993 13:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:47.993 13:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:47.993 13:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:47.993 13:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.993 13:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:47.993 13:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:47.993 00:08:47.993 real 0m4.403s 00:08:47.994 user 0m5.270s 00:08:47.994 sys 0m0.580s 00:08:47.994 13:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.994 13:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.994 ************************************ 00:08:47.994 END TEST raid_read_error_test 00:08:47.994 ************************************ 00:08:47.994 13:25:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:47.994 13:25:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.994 13:25:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.994 13:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.994 ************************************ 00:08:47.994 START TEST 
raid_write_error_test 00:08:47.994 ************************************ 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.994 13:25:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.H8spW1guM9 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63706 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63706 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63706 ']' 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.994 13:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.994 [2024-11-18 13:25:17.962333] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:47.994 [2024-11-18 13:25:17.962462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63706 ] 00:08:48.254 [2024-11-18 13:25:18.137728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.255 [2024-11-18 13:25:18.254888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.514 [2024-11-18 13:25:18.463914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.514 [2024-11-18 13:25:18.463950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.774 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.774 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:48.774 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.774 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.774 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.774 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 BaseBdev1_malloc 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 true 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 [2024-11-18 13:25:18.866165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:49.034 [2024-11-18 13:25:18.866241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.034 [2024-11-18 13:25:18.866264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:49.034 [2024-11-18 13:25:18.866278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.034 [2024-11-18 13:25:18.868507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.034 [2024-11-18 13:25:18.868549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:49.034 BaseBdev1 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 BaseBdev2_malloc 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:49.034 13:25:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 true 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 [2024-11-18 13:25:18.927255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:49.034 [2024-11-18 13:25:18.927317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.034 [2024-11-18 13:25:18.927335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:49.034 [2024-11-18 13:25:18.927347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.034 [2024-11-18 13:25:18.929387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.034 [2024-11-18 13:25:18.929427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:49.034 BaseBdev2 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.034 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.034 [2024-11-18 13:25:18.935280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:49.034 [2024-11-18 13:25:18.937090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.034 [2024-11-18 13:25:18.937299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.034 [2024-11-18 13:25:18.937322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:49.035 [2024-11-18 13:25:18.937559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:49.035 [2024-11-18 13:25:18.937744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.035 [2024-11-18 13:25:18.937762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:49.035 [2024-11-18 13:25:18.937910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.035 "name": "raid_bdev1", 00:08:49.035 "uuid": "7bc5f5f2-f6ef-43be-9d28-a95852009ff7", 00:08:49.035 "strip_size_kb": 0, 00:08:49.035 "state": "online", 00:08:49.035 "raid_level": "raid1", 00:08:49.035 "superblock": true, 00:08:49.035 "num_base_bdevs": 2, 00:08:49.035 "num_base_bdevs_discovered": 2, 00:08:49.035 "num_base_bdevs_operational": 2, 00:08:49.035 "base_bdevs_list": [ 00:08:49.035 { 00:08:49.035 "name": "BaseBdev1", 00:08:49.035 "uuid": "5f6d4775-ce28-50ce-8199-30f3d0e78c6e", 00:08:49.035 "is_configured": true, 00:08:49.035 "data_offset": 2048, 00:08:49.035 "data_size": 63488 00:08:49.035 }, 00:08:49.035 { 00:08:49.035 "name": "BaseBdev2", 00:08:49.035 "uuid": "f82a0729-301d-5324-92f0-8a97216cd867", 00:08:49.035 "is_configured": true, 00:08:49.035 "data_offset": 2048, 00:08:49.035 "data_size": 63488 00:08:49.035 } 00:08:49.035 ] 00:08:49.035 }' 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.035 13:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.618 13:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:49.618 13:25:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:49.618 [2024-11-18 13:25:19.503738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.559 [2024-11-18 13:25:20.412357] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:50.559 [2024-11-18 13:25:20.412437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.559 [2024-11-18 13:25:20.412639] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.559 "name": "raid_bdev1", 00:08:50.559 "uuid": "7bc5f5f2-f6ef-43be-9d28-a95852009ff7", 00:08:50.559 "strip_size_kb": 0, 00:08:50.559 "state": "online", 00:08:50.559 "raid_level": "raid1", 00:08:50.559 "superblock": true, 00:08:50.559 "num_base_bdevs": 2, 00:08:50.559 "num_base_bdevs_discovered": 1, 00:08:50.559 "num_base_bdevs_operational": 1, 00:08:50.559 "base_bdevs_list": [ 00:08:50.559 { 00:08:50.559 "name": null, 00:08:50.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.559 "is_configured": false, 00:08:50.559 "data_offset": 0, 00:08:50.559 "data_size": 63488 00:08:50.559 }, 00:08:50.559 { 00:08:50.559 "name": 
"BaseBdev2", 00:08:50.559 "uuid": "f82a0729-301d-5324-92f0-8a97216cd867", 00:08:50.559 "is_configured": true, 00:08:50.559 "data_offset": 2048, 00:08:50.559 "data_size": 63488 00:08:50.559 } 00:08:50.559 ] 00:08:50.559 }' 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.559 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.129 [2024-11-18 13:25:20.882898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.129 [2024-11-18 13:25:20.882948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.129 [2024-11-18 13:25:20.885485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.129 [2024-11-18 13:25:20.885530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.129 [2024-11-18 13:25:20.885590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.129 [2024-11-18 13:25:20.885600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63706 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63706 ']' 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63706 00:08:51.129 { 00:08:51.129 "results": [ 
00:08:51.129 { 00:08:51.129 "job": "raid_bdev1", 00:08:51.129 "core_mask": "0x1", 00:08:51.129 "workload": "randrw", 00:08:51.129 "percentage": 50, 00:08:51.129 "status": "finished", 00:08:51.129 "queue_depth": 1, 00:08:51.129 "io_size": 131072, 00:08:51.129 "runtime": 1.379959, 00:08:51.129 "iops": 19478.114929501528, 00:08:51.129 "mibps": 2434.764366187691, 00:08:51.129 "io_failed": 0, 00:08:51.129 "io_timeout": 0, 00:08:51.129 "avg_latency_us": 48.52580402778682, 00:08:51.129 "min_latency_us": 22.246288209606988, 00:08:51.129 "max_latency_us": 1717.1004366812226 00:08:51.129 } 00:08:51.129 ], 00:08:51.129 "core_count": 1 00:08:51.129 } 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63706 00:08:51.129 killing process with pid 63706 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63706' 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63706 00:08:51.129 13:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63706 00:08:51.129 [2024-11-18 13:25:20.935019] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.129 [2024-11-18 13:25:21.073728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.H8spW1guM9 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:52.512 00:08:52.512 real 0m4.427s 00:08:52.512 user 0m5.308s 00:08:52.512 sys 0m0.580s 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.512 13:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.512 ************************************ 00:08:52.512 END TEST raid_write_error_test 00:08:52.512 ************************************ 00:08:52.513 13:25:22 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:52.513 13:25:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:52.513 13:25:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:52.513 13:25:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:52.513 13:25:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.513 13:25:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.513 ************************************ 00:08:52.513 START TEST raid_state_function_test 00:08:52.513 ************************************ 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:52.513 
13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63844 00:08:52.513 Process raid pid: 63844 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63844' 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63844 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63844 ']' 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.513 13:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:52.513 [2024-11-18 13:25:22.455957] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:52.513 [2024-11-18 13:25:22.456092] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.773 [2024-11-18 13:25:22.637596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.773 [2024-11-18 13:25:22.752762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.033 [2024-11-18 13:25:22.957654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.033 [2024-11-18 13:25:22.957699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.293 [2024-11-18 13:25:23.298499] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.293 [2024-11-18 
13:25:23.298565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.293 [2024-11-18 13:25:23.298576] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.293 [2024-11-18 13:25:23.298586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.293 [2024-11-18 13:25:23.298593] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.293 [2024-11-18 13:25:23.298601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.293 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.552 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.553 "name": "Existed_Raid", 00:08:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.553 "strip_size_kb": 64, 00:08:53.553 "state": "configuring", 00:08:53.553 "raid_level": "raid0", 00:08:53.553 "superblock": false, 00:08:53.553 "num_base_bdevs": 3, 00:08:53.553 "num_base_bdevs_discovered": 0, 00:08:53.553 "num_base_bdevs_operational": 3, 00:08:53.553 "base_bdevs_list": [ 00:08:53.553 { 00:08:53.553 "name": "BaseBdev1", 00:08:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.553 "is_configured": false, 00:08:53.553 "data_offset": 0, 00:08:53.553 "data_size": 0 00:08:53.553 }, 00:08:53.553 { 00:08:53.553 "name": "BaseBdev2", 00:08:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.553 "is_configured": false, 00:08:53.553 "data_offset": 0, 00:08:53.553 "data_size": 0 00:08:53.553 }, 00:08:53.553 { 00:08:53.553 "name": "BaseBdev3", 00:08:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.553 "is_configured": false, 00:08:53.553 "data_offset": 0, 00:08:53.553 "data_size": 0 00:08:53.553 } 00:08:53.553 ] 00:08:53.553 }' 00:08:53.553 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.553 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.813 [2024-11-18 13:25:23.777760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.813 [2024-11-18 13:25:23.777815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.813 [2024-11-18 13:25:23.789703] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.813 [2024-11-18 13:25:23.789752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.813 [2024-11-18 13:25:23.789761] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.813 [2024-11-18 13:25:23.789771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.813 [2024-11-18 13:25:23.789777] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.813 [2024-11-18 13:25:23.789785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.813 [2024-11-18 13:25:23.832515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.813 BaseBdev1 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.813 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.814 [ 00:08:53.814 { 
00:08:53.814 "name": "BaseBdev1", 00:08:53.814 "aliases": [ 00:08:53.814 "9dff4e25-7838-4f97-8724-c9d0c0ab2ecf" 00:08:53.814 ], 00:08:53.814 "product_name": "Malloc disk", 00:08:53.814 "block_size": 512, 00:08:53.814 "num_blocks": 65536, 00:08:53.814 "uuid": "9dff4e25-7838-4f97-8724-c9d0c0ab2ecf", 00:08:53.814 "assigned_rate_limits": { 00:08:53.814 "rw_ios_per_sec": 0, 00:08:53.814 "rw_mbytes_per_sec": 0, 00:08:53.814 "r_mbytes_per_sec": 0, 00:08:53.814 "w_mbytes_per_sec": 0 00:08:53.814 }, 00:08:53.814 "claimed": true, 00:08:53.814 "claim_type": "exclusive_write", 00:08:53.814 "zoned": false, 00:08:53.814 "supported_io_types": { 00:08:53.814 "read": true, 00:08:53.814 "write": true, 00:08:53.814 "unmap": true, 00:08:53.814 "flush": true, 00:08:53.814 "reset": true, 00:08:53.814 "nvme_admin": false, 00:08:53.814 "nvme_io": false, 00:08:53.814 "nvme_io_md": false, 00:08:53.814 "write_zeroes": true, 00:08:53.814 "zcopy": true, 00:08:53.814 "get_zone_info": false, 00:08:53.814 "zone_management": false, 00:08:53.814 "zone_append": false, 00:08:53.814 "compare": false, 00:08:53.814 "compare_and_write": false, 00:08:53.814 "abort": true, 00:08:53.814 "seek_hole": false, 00:08:53.814 "seek_data": false, 00:08:53.814 "copy": true, 00:08:53.814 "nvme_iov_md": false 00:08:53.814 }, 00:08:53.814 "memory_domains": [ 00:08:53.814 { 00:08:53.814 "dma_device_id": "system", 00:08:53.814 "dma_device_type": 1 00:08:53.814 }, 00:08:53.814 { 00:08:53.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.814 "dma_device_type": 2 00:08:53.814 } 00:08:53.814 ], 00:08:53.814 "driver_specific": {} 00:08:53.814 } 00:08:53.814 ] 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.814 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.074 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.074 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.074 "name": "Existed_Raid", 00:08:54.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.074 "strip_size_kb": 64, 00:08:54.074 "state": "configuring", 00:08:54.074 "raid_level": "raid0", 00:08:54.074 "superblock": false, 00:08:54.074 "num_base_bdevs": 3, 00:08:54.074 
"num_base_bdevs_discovered": 1, 00:08:54.074 "num_base_bdevs_operational": 3, 00:08:54.074 "base_bdevs_list": [ 00:08:54.074 { 00:08:54.074 "name": "BaseBdev1", 00:08:54.074 "uuid": "9dff4e25-7838-4f97-8724-c9d0c0ab2ecf", 00:08:54.074 "is_configured": true, 00:08:54.074 "data_offset": 0, 00:08:54.074 "data_size": 65536 00:08:54.074 }, 00:08:54.074 { 00:08:54.074 "name": "BaseBdev2", 00:08:54.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.074 "is_configured": false, 00:08:54.074 "data_offset": 0, 00:08:54.074 "data_size": 0 00:08:54.074 }, 00:08:54.074 { 00:08:54.074 "name": "BaseBdev3", 00:08:54.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.074 "is_configured": false, 00:08:54.074 "data_offset": 0, 00:08:54.074 "data_size": 0 00:08:54.074 } 00:08:54.074 ] 00:08:54.074 }' 00:08:54.074 13:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.074 13:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.345 [2024-11-18 13:25:24.323734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.345 [2024-11-18 13:25:24.323795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.345 [2024-11-18 13:25:24.331775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.345 [2024-11-18 13:25:24.333661] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.345 [2024-11-18 13:25:24.333703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.345 [2024-11-18 13:25:24.333713] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.345 [2024-11-18 13:25:24.333723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.345 13:25:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.345 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.346 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.346 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.346 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.346 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.346 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.346 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.346 "name": "Existed_Raid", 00:08:54.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.346 "strip_size_kb": 64, 00:08:54.346 "state": "configuring", 00:08:54.346 "raid_level": "raid0", 00:08:54.346 "superblock": false, 00:08:54.346 "num_base_bdevs": 3, 00:08:54.346 "num_base_bdevs_discovered": 1, 00:08:54.346 "num_base_bdevs_operational": 3, 00:08:54.346 "base_bdevs_list": [ 00:08:54.346 { 00:08:54.346 "name": "BaseBdev1", 00:08:54.346 "uuid": "9dff4e25-7838-4f97-8724-c9d0c0ab2ecf", 00:08:54.346 "is_configured": true, 00:08:54.346 "data_offset": 0, 00:08:54.346 "data_size": 65536 00:08:54.346 }, 00:08:54.346 { 00:08:54.346 "name": "BaseBdev2", 00:08:54.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.346 "is_configured": false, 00:08:54.346 "data_offset": 0, 00:08:54.346 "data_size": 0 00:08:54.346 }, 00:08:54.346 { 00:08:54.346 "name": "BaseBdev3", 00:08:54.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.346 "is_configured": false, 00:08:54.346 "data_offset": 
0, 00:08:54.346 "data_size": 0 00:08:54.346 } 00:08:54.346 ] 00:08:54.346 }' 00:08:54.346 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.346 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.929 [2024-11-18 13:25:24.792253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.929 BaseBdev2 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:54.929 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.930 [
00:08:54.930 {
00:08:54.930 "name": "BaseBdev2",
00:08:54.930 "aliases": [
00:08:54.930 "c33b73fd-eb40-40d2-9d89-6f22b31cfdc8"
00:08:54.930 ],
00:08:54.930 "product_name": "Malloc disk",
00:08:54.930 "block_size": 512,
00:08:54.930 "num_blocks": 65536,
00:08:54.930 "uuid": "c33b73fd-eb40-40d2-9d89-6f22b31cfdc8",
00:08:54.930 "assigned_rate_limits": {
00:08:54.930 "rw_ios_per_sec": 0,
00:08:54.930 "rw_mbytes_per_sec": 0,
00:08:54.930 "r_mbytes_per_sec": 0,
00:08:54.930 "w_mbytes_per_sec": 0
00:08:54.930 },
00:08:54.930 "claimed": true,
00:08:54.930 "claim_type": "exclusive_write",
00:08:54.930 "zoned": false,
00:08:54.930 "supported_io_types": {
00:08:54.930 "read": true,
00:08:54.930 "write": true,
00:08:54.930 "unmap": true,
00:08:54.930 "flush": true,
00:08:54.930 "reset": true,
00:08:54.930 "nvme_admin": false,
00:08:54.930 "nvme_io": false,
00:08:54.930 "nvme_io_md": false,
00:08:54.930 "write_zeroes": true,
00:08:54.930 "zcopy": true,
00:08:54.930 "get_zone_info": false,
00:08:54.930 "zone_management": false,
00:08:54.930 "zone_append": false,
00:08:54.930 "compare": false,
00:08:54.930 "compare_and_write": false,
00:08:54.930 "abort": true,
00:08:54.930 "seek_hole": false,
00:08:54.930 "seek_data": false,
00:08:54.930 "copy": true,
00:08:54.930 "nvme_iov_md": false
00:08:54.930 },
00:08:54.930 "memory_domains": [
00:08:54.930 {
00:08:54.930 "dma_device_id": "system",
00:08:54.930 "dma_device_type": 1
00:08:54.930 },
00:08:54.930 {
00:08:54.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:54.930 "dma_device_type": 2
00:08:54.930 }
00:08:54.930 ],
00:08:54.930 "driver_specific": {}
00:08:54.930 }
00:08:54.930 ]
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:54.930 "name": "Existed_Raid",
00:08:54.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:54.930 "strip_size_kb": 64,
00:08:54.930 "state": "configuring",
00:08:54.930 "raid_level": "raid0",
00:08:54.930 "superblock": false,
00:08:54.930 "num_base_bdevs": 3,
00:08:54.930 "num_base_bdevs_discovered": 2,
00:08:54.930 "num_base_bdevs_operational": 3,
00:08:54.930 "base_bdevs_list": [
00:08:54.930 {
00:08:54.930 "name": "BaseBdev1",
00:08:54.930 "uuid": "9dff4e25-7838-4f97-8724-c9d0c0ab2ecf",
00:08:54.930 "is_configured": true,
00:08:54.930 "data_offset": 0,
00:08:54.930 "data_size": 65536
00:08:54.930 },
00:08:54.930 {
00:08:54.930 "name": "BaseBdev2",
00:08:54.930 "uuid": "c33b73fd-eb40-40d2-9d89-6f22b31cfdc8",
00:08:54.930 "is_configured": true,
00:08:54.930 "data_offset": 0,
00:08:54.930 "data_size": 65536
00:08:54.930 },
00:08:54.930 {
00:08:54.930 "name": "BaseBdev3",
00:08:54.930 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:54.930 "is_configured": false,
00:08:54.930 "data_offset": 0,
00:08:54.930 "data_size": 0
00:08:54.930 }
00:08:54.930 ]
00:08:54.930 }'
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:54.930 13:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.190 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:55.190 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.190 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.450 [2024-11-18 13:25:25.266493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:55.450 [2024-11-18 13:25:25.266540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:55.450 [2024-11-18 13:25:25.266554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:55.450 [2024-11-18 13:25:25.266826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:55.450 [2024-11-18 13:25:25.266986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:55.450 [2024-11-18 13:25:25.266995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:55.450 [2024-11-18 13:25:25.267299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:55.450 BaseBdev3
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.451 [
00:08:55.451 {
00:08:55.451 "name": "BaseBdev3",
00:08:55.451 "aliases": [
00:08:55.451 "7e4af00f-5b21-402c-bf61-d16904e40ad7"
00:08:55.451 ],
00:08:55.451 "product_name": "Malloc disk",
00:08:55.451 "block_size": 512,
00:08:55.451 "num_blocks": 65536,
00:08:55.451 "uuid": "7e4af00f-5b21-402c-bf61-d16904e40ad7",
00:08:55.451 "assigned_rate_limits": {
00:08:55.451 "rw_ios_per_sec": 0,
00:08:55.451 "rw_mbytes_per_sec": 0,
00:08:55.451 "r_mbytes_per_sec": 0,
00:08:55.451 "w_mbytes_per_sec": 0
00:08:55.451 },
00:08:55.451 "claimed": true,
00:08:55.451 "claim_type": "exclusive_write",
00:08:55.451 "zoned": false,
00:08:55.451 "supported_io_types": {
00:08:55.451 "read": true,
00:08:55.451 "write": true,
00:08:55.451 "unmap": true,
00:08:55.451 "flush": true,
00:08:55.451 "reset": true,
00:08:55.451 "nvme_admin": false,
00:08:55.451 "nvme_io": false,
00:08:55.451 "nvme_io_md": false,
00:08:55.451 "write_zeroes": true,
00:08:55.451 "zcopy": true,
00:08:55.451 "get_zone_info": false,
00:08:55.451 "zone_management": false,
00:08:55.451 "zone_append": false,
00:08:55.451 "compare": false,
00:08:55.451 "compare_and_write": false,
00:08:55.451 "abort": true,
00:08:55.451 "seek_hole": false,
00:08:55.451 "seek_data": false,
00:08:55.451 "copy": true,
00:08:55.451 "nvme_iov_md": false
00:08:55.451 },
00:08:55.451 "memory_domains": [
00:08:55.451 {
00:08:55.451 "dma_device_id": "system",
00:08:55.451 "dma_device_type": 1
00:08:55.451 },
00:08:55.451 {
00:08:55.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:55.451 "dma_device_type": 2
00:08:55.451 }
00:08:55.451 ],
00:08:55.451 "driver_specific": {}
00:08:55.451 }
00:08:55.451 ]
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:55.451 "name": "Existed_Raid",
00:08:55.451 "uuid": "36933630-fa13-4235-b226-5e5b068aa527",
00:08:55.451 "strip_size_kb": 64,
00:08:55.451 "state": "online",
00:08:55.451 "raid_level": "raid0",
00:08:55.451 "superblock": false,
00:08:55.451 "num_base_bdevs": 3,
00:08:55.451 "num_base_bdevs_discovered": 3,
00:08:55.451 "num_base_bdevs_operational": 3,
00:08:55.451 "base_bdevs_list": [
00:08:55.451 {
00:08:55.451 "name": "BaseBdev1",
00:08:55.451 "uuid": "9dff4e25-7838-4f97-8724-c9d0c0ab2ecf",
00:08:55.451 "is_configured": true,
00:08:55.451 "data_offset": 0,
00:08:55.451 "data_size": 65536
00:08:55.451 },
00:08:55.451 {
00:08:55.451 "name": "BaseBdev2",
00:08:55.451 "uuid": "c33b73fd-eb40-40d2-9d89-6f22b31cfdc8",
00:08:55.451 "is_configured": true,
00:08:55.451 "data_offset": 0,
00:08:55.451 "data_size": 65536
00:08:55.451 },
00:08:55.451 {
00:08:55.451 "name": "BaseBdev3",
00:08:55.451 "uuid": "7e4af00f-5b21-402c-bf61-d16904e40ad7",
00:08:55.451 "is_configured": true,
00:08:55.451 "data_offset": 0,
00:08:55.451 "data_size": 65536
00:08:55.451 }
00:08:55.451 ]
00:08:55.451 }'
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:55.451 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:55.711 [2024-11-18 13:25:25.742031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:55.711 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:55.972 "name": "Existed_Raid",
00:08:55.972 "aliases": [
00:08:55.972 "36933630-fa13-4235-b226-5e5b068aa527"
00:08:55.972 ],
00:08:55.972 "product_name": "Raid Volume",
00:08:55.972 "block_size": 512,
00:08:55.972 "num_blocks": 196608,
00:08:55.972 "uuid": "36933630-fa13-4235-b226-5e5b068aa527",
00:08:55.972 "assigned_rate_limits": {
00:08:55.972 "rw_ios_per_sec": 0,
00:08:55.972 "rw_mbytes_per_sec": 0,
00:08:55.972 "r_mbytes_per_sec": 0,
00:08:55.972 "w_mbytes_per_sec": 0
00:08:55.972 },
00:08:55.972 "claimed": false,
00:08:55.972 "zoned": false,
00:08:55.972 "supported_io_types": {
00:08:55.972 "read": true,
00:08:55.972 "write": true,
00:08:55.972 "unmap": true,
00:08:55.972 "flush": true,
00:08:55.972 "reset": true,
00:08:55.972 "nvme_admin": false,
00:08:55.972 "nvme_io": false,
00:08:55.972 "nvme_io_md": false,
00:08:55.972 "write_zeroes": true,
00:08:55.972 "zcopy": false,
00:08:55.972 "get_zone_info": false,
00:08:55.972 "zone_management": false,
00:08:55.972 "zone_append": false,
00:08:55.972 "compare": false,
00:08:55.972 "compare_and_write": false,
00:08:55.972 "abort": false,
00:08:55.972 "seek_hole": false,
00:08:55.972 "seek_data": false,
00:08:55.972 "copy": false,
00:08:55.972 "nvme_iov_md": false
00:08:55.972 },
00:08:55.972 "memory_domains": [
00:08:55.972 {
00:08:55.972 "dma_device_id": "system",
00:08:55.972 "dma_device_type": 1
00:08:55.972 },
00:08:55.972 {
00:08:55.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:55.972 "dma_device_type": 2
00:08:55.972 },
00:08:55.972 {
00:08:55.972 "dma_device_id": "system",
00:08:55.972 "dma_device_type": 1
00:08:55.972 },
00:08:55.972 {
00:08:55.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:55.972 "dma_device_type": 2
00:08:55.972 },
00:08:55.972 {
00:08:55.972 "dma_device_id": "system",
00:08:55.972 "dma_device_type": 1
00:08:55.972 },
00:08:55.972 {
00:08:55.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:55.972 "dma_device_type": 2
00:08:55.972 }
00:08:55.972 ],
00:08:55.972 "driver_specific": {
00:08:55.972 "raid": {
00:08:55.972 "uuid": "36933630-fa13-4235-b226-5e5b068aa527",
00:08:55.972 "strip_size_kb": 64,
00:08:55.972 "state": "online",
00:08:55.972 "raid_level": "raid0",
00:08:55.972 "superblock": false,
00:08:55.972 "num_base_bdevs": 3,
00:08:55.972 "num_base_bdevs_discovered": 3,
00:08:55.972 "num_base_bdevs_operational": 3,
00:08:55.972 "base_bdevs_list": [
00:08:55.972 {
00:08:55.972 "name": "BaseBdev1",
00:08:55.972 "uuid": "9dff4e25-7838-4f97-8724-c9d0c0ab2ecf",
00:08:55.972 "is_configured": true,
00:08:55.972 "data_offset": 0,
00:08:55.972 "data_size": 65536
00:08:55.972 },
00:08:55.972 {
00:08:55.972 "name": "BaseBdev2",
00:08:55.972 "uuid": "c33b73fd-eb40-40d2-9d89-6f22b31cfdc8",
00:08:55.972 "is_configured": true,
00:08:55.972 "data_offset": 0,
00:08:55.972 "data_size": 65536
00:08:55.972 },
00:08:55.972 {
00:08:55.972 "name": "BaseBdev3",
00:08:55.972 "uuid": "7e4af00f-5b21-402c-bf61-d16904e40ad7",
00:08:55.972 "is_configured": true,
00:08:55.972 "data_offset": 0,
00:08:55.972 "data_size": 65536
00:08:55.972 }
00:08:55.972 ]
00:08:55.972 }
00:08:55.972 }
00:08:55.972 }'
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:55.972 BaseBdev2
00:08:55.972 BaseBdev3'
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.972 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:55.973 13:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.973 [2024-11-18 13:25:25.969423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:55.973 [2024-11-18 13:25:25.969545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:55.973 [2024-11-18 13:25:25.969610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:56.233 "name": "Existed_Raid",
00:08:56.233 "uuid": "36933630-fa13-4235-b226-5e5b068aa527",
00:08:56.233 "strip_size_kb": 64,
00:08:56.233 "state": "offline",
00:08:56.233 "raid_level": "raid0",
00:08:56.233 "superblock": false,
00:08:56.233 "num_base_bdevs": 3,
00:08:56.233 "num_base_bdevs_discovered": 2,
00:08:56.233 "num_base_bdevs_operational": 2,
00:08:56.233 "base_bdevs_list": [
00:08:56.233 {
00:08:56.233 "name": null,
00:08:56.233 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:56.233 "is_configured": false,
00:08:56.233 "data_offset": 0,
00:08:56.233 "data_size": 65536
00:08:56.233 },
00:08:56.233 {
00:08:56.233 "name": "BaseBdev2",
00:08:56.233 "uuid": "c33b73fd-eb40-40d2-9d89-6f22b31cfdc8",
00:08:56.233 "is_configured": true,
00:08:56.233 "data_offset": 0,
00:08:56.233 "data_size": 65536
00:08:56.233 },
00:08:56.233 {
00:08:56.233 "name": "BaseBdev3",
00:08:56.233 "uuid": "7e4af00f-5b21-402c-bf61-d16904e40ad7",
00:08:56.233 "is_configured": true,
00:08:56.233 "data_offset": 0,
00:08:56.233 "data_size": 65536
00:08:56.233 }
00:08:56.233 ]
00:08:56.233 }'
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:56.233 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.494 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.494 [2024-11-18 13:25:26.506822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.754 [2024-11-18 13:25:26.657145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:56.754 [2024-11-18 13:25:26.657286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:56.754 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.014 BaseBdev2
00:08:57.014 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.014 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:57.014 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:57.014 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:57.014 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:57.014 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:57.014 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:57.014 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.015 [
00:08:57.015 {
00:08:57.015 "name": "BaseBdev2",
00:08:57.015 "aliases": [
00:08:57.015 "469d9ad9-c180-47eb-afa4-ed5114fb43aa"
00:08:57.015 ],
00:08:57.015 "product_name": "Malloc disk",
00:08:57.015 "block_size": 512,
00:08:57.015 "num_blocks": 65536,
00:08:57.015 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa",
00:08:57.015 "assigned_rate_limits": {
00:08:57.015 "rw_ios_per_sec": 0,
00:08:57.015 "rw_mbytes_per_sec": 0,
00:08:57.015 "r_mbytes_per_sec": 0,
00:08:57.015 "w_mbytes_per_sec": 0
00:08:57.015 },
00:08:57.015 "claimed": false,
00:08:57.015 "zoned": false,
00:08:57.015 "supported_io_types": {
00:08:57.015 "read": true,
00:08:57.015 "write": true,
00:08:57.015 "unmap": true,
00:08:57.015 "flush": true,
00:08:57.015 "reset": true,
00:08:57.015 "nvme_admin": false,
00:08:57.015 "nvme_io": false,
00:08:57.015 "nvme_io_md": false,
00:08:57.015 "write_zeroes": true,
00:08:57.015 "zcopy": true,
00:08:57.015 "get_zone_info": false,
00:08:57.015 "zone_management": false,
00:08:57.015 "zone_append": false,
00:08:57.015 "compare": false,
00:08:57.015 "compare_and_write": false,
00:08:57.015 "abort": true,
00:08:57.015 "seek_hole": false,
00:08:57.015 "seek_data": false,
00:08:57.015 "copy": true,
00:08:57.015 "nvme_iov_md": false
00:08:57.015 },
00:08:57.015 "memory_domains": [
00:08:57.015 {
00:08:57.015 "dma_device_id": "system",
00:08:57.015 "dma_device_type": 1
00:08:57.015 },
00:08:57.015 {
00:08:57.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.015 "dma_device_type": 2
00:08:57.015 }
00:08:57.015 ],
00:08:57.015 "driver_specific": {}
00:08:57.015 }
00:08:57.015 ]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.015 BaseBdev3
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.015 [
00:08:57.015 {
00:08:57.015 "name": "BaseBdev3",
00:08:57.015 "aliases": [
00:08:57.015 "924465ec-22d1-449f-b1c4-532d0ed1be8a"
00:08:57.015 ],
00:08:57.015 "product_name": "Malloc disk",
00:08:57.015 "block_size": 512,
00:08:57.015 "num_blocks": 65536,
00:08:57.015 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a",
00:08:57.015 "assigned_rate_limits": {
00:08:57.015 "rw_ios_per_sec": 0,
00:08:57.015 "rw_mbytes_per_sec": 0,
00:08:57.015 "r_mbytes_per_sec": 0,
00:08:57.015 "w_mbytes_per_sec": 0
00:08:57.015 },
00:08:57.015 "claimed": false,
00:08:57.015 "zoned": false,
00:08:57.015 "supported_io_types": {
00:08:57.015 "read": true,
00:08:57.015 "write": true,
00:08:57.015 "unmap": true,
00:08:57.015 "flush": true,
00:08:57.015 "reset": true,
00:08:57.015 "nvme_admin": false,
00:08:57.015 "nvme_io": false,
00:08:57.015 "nvme_io_md": false,
00:08:57.015 "write_zeroes": true,
00:08:57.015 "zcopy": true,
00:08:57.015 "get_zone_info": false,
00:08:57.015 "zone_management": false,
00:08:57.015 "zone_append": false,
00:08:57.015 "compare": false,
00:08:57.015 "compare_and_write": false,
00:08:57.015 "abort": true,
00:08:57.015 "seek_hole": false,
00:08:57.015 "seek_data": false,
00:08:57.015 "copy": true,
00:08:57.015 "nvme_iov_md": false
00:08:57.015 },
00:08:57.015 "memory_domains": [
00:08:57.015 {
00:08:57.015 "dma_device_id": "system",
00:08:57.015 "dma_device_type": 1
00:08:57.015 },
00:08:57.015 {
00:08:57.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.015 "dma_device_type": 2
00:08:57.015 }
00:08:57.015 ],
00:08:57.015 "driver_specific": {}
00:08:57.015 }
00:08:57.015 ]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.015 [2024-11-18 13:25:26.942762] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:57.015 [2024-11-18 13:25:26.942909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:57.015 [2024-11-18 13:25:26.942953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:57.015 [2024-11-18 13:25:26.944753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.015 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:57.015 "name": "Existed_Raid",
00:08:57.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.015 "strip_size_kb": 64,
00:08:57.015 "state": "configuring",
00:08:57.015 "raid_level": "raid0",
00:08:57.015 "superblock": false,
00:08:57.015 "num_base_bdevs": 3,
00:08:57.015 "num_base_bdevs_discovered": 2,
00:08:57.015 "num_base_bdevs_operational": 3,
00:08:57.015 "base_bdevs_list": [ 00:08:57.015 { 00:08:57.015 "name": "BaseBdev1", 00:08:57.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.015 "is_configured": false, 00:08:57.015 "data_offset": 0, 00:08:57.015 "data_size": 0 00:08:57.015 }, 00:08:57.015 { 00:08:57.015 "name": "BaseBdev2", 00:08:57.015 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:08:57.015 "is_configured": true, 00:08:57.015 "data_offset": 0, 00:08:57.015 "data_size": 65536 00:08:57.015 }, 00:08:57.015 { 00:08:57.015 "name": "BaseBdev3", 00:08:57.015 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:08:57.015 "is_configured": true, 00:08:57.015 "data_offset": 0, 00:08:57.015 "data_size": 65536 00:08:57.015 } 00:08:57.015 ] 00:08:57.015 }' 00:08:57.016 13:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.016 13:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.585 [2024-11-18 13:25:27.378044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.585 "name": "Existed_Raid", 00:08:57.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.585 "strip_size_kb": 64, 00:08:57.585 "state": "configuring", 00:08:57.585 "raid_level": "raid0", 00:08:57.585 "superblock": false, 00:08:57.585 "num_base_bdevs": 3, 00:08:57.585 "num_base_bdevs_discovered": 1, 00:08:57.585 "num_base_bdevs_operational": 3, 00:08:57.585 "base_bdevs_list": [ 00:08:57.585 { 00:08:57.585 "name": "BaseBdev1", 00:08:57.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.585 "is_configured": false, 00:08:57.585 "data_offset": 0, 00:08:57.585 "data_size": 0 00:08:57.585 }, 00:08:57.585 { 00:08:57.585 "name": null, 
00:08:57.585 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:08:57.585 "is_configured": false, 00:08:57.585 "data_offset": 0, 00:08:57.585 "data_size": 65536 00:08:57.585 }, 00:08:57.585 { 00:08:57.585 "name": "BaseBdev3", 00:08:57.585 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:08:57.585 "is_configured": true, 00:08:57.585 "data_offset": 0, 00:08:57.585 "data_size": 65536 00:08:57.585 } 00:08:57.585 ] 00:08:57.585 }' 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.585 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 [2024-11-18 13:25:27.873198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.846 BaseBdev1 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 
-- # waitforbdev BaseBdev1 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.846 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.107 [ 00:08:58.107 { 00:08:58.107 "name": "BaseBdev1", 00:08:58.107 "aliases": [ 00:08:58.107 "885e611e-6a59-422e-84a2-ab26cc41a749" 00:08:58.107 ], 00:08:58.107 "product_name": "Malloc disk", 00:08:58.107 "block_size": 512, 00:08:58.107 "num_blocks": 65536, 00:08:58.107 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:08:58.107 "assigned_rate_limits": { 00:08:58.107 "rw_ios_per_sec": 0, 00:08:58.107 "rw_mbytes_per_sec": 0, 00:08:58.107 "r_mbytes_per_sec": 0, 00:08:58.107 "w_mbytes_per_sec": 0 00:08:58.107 }, 00:08:58.107 "claimed": true, 00:08:58.107 "claim_type": "exclusive_write", 00:08:58.107 
"zoned": false, 00:08:58.107 "supported_io_types": { 00:08:58.107 "read": true, 00:08:58.107 "write": true, 00:08:58.107 "unmap": true, 00:08:58.107 "flush": true, 00:08:58.107 "reset": true, 00:08:58.107 "nvme_admin": false, 00:08:58.107 "nvme_io": false, 00:08:58.107 "nvme_io_md": false, 00:08:58.107 "write_zeroes": true, 00:08:58.107 "zcopy": true, 00:08:58.107 "get_zone_info": false, 00:08:58.107 "zone_management": false, 00:08:58.107 "zone_append": false, 00:08:58.107 "compare": false, 00:08:58.107 "compare_and_write": false, 00:08:58.107 "abort": true, 00:08:58.107 "seek_hole": false, 00:08:58.107 "seek_data": false, 00:08:58.107 "copy": true, 00:08:58.107 "nvme_iov_md": false 00:08:58.107 }, 00:08:58.107 "memory_domains": [ 00:08:58.107 { 00:08:58.107 "dma_device_id": "system", 00:08:58.107 "dma_device_type": 1 00:08:58.107 }, 00:08:58.107 { 00:08:58.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.107 "dma_device_type": 2 00:08:58.107 } 00:08:58.107 ], 00:08:58.107 "driver_specific": {} 00:08:58.107 } 00:08:58.107 ] 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.107 
13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.107 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.107 "name": "Existed_Raid", 00:08:58.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.107 "strip_size_kb": 64, 00:08:58.107 "state": "configuring", 00:08:58.107 "raid_level": "raid0", 00:08:58.107 "superblock": false, 00:08:58.107 "num_base_bdevs": 3, 00:08:58.107 "num_base_bdevs_discovered": 2, 00:08:58.107 "num_base_bdevs_operational": 3, 00:08:58.107 "base_bdevs_list": [ 00:08:58.107 { 00:08:58.107 "name": "BaseBdev1", 00:08:58.107 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:08:58.107 "is_configured": true, 00:08:58.107 "data_offset": 0, 00:08:58.107 "data_size": 65536 00:08:58.107 }, 00:08:58.107 { 00:08:58.107 "name": null, 00:08:58.107 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:08:58.107 "is_configured": false, 00:08:58.107 "data_offset": 0, 00:08:58.107 "data_size": 65536 00:08:58.107 }, 00:08:58.107 { 00:08:58.107 "name": "BaseBdev3", 00:08:58.108 
"uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:08:58.108 "is_configured": true, 00:08:58.108 "data_offset": 0, 00:08:58.108 "data_size": 65536 00:08:58.108 } 00:08:58.108 ] 00:08:58.108 }' 00:08:58.108 13:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.108 13:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.382 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.382 [2024-11-18 13:25:28.432274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.642 "name": "Existed_Raid", 00:08:58.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.642 "strip_size_kb": 64, 00:08:58.642 "state": "configuring", 00:08:58.642 "raid_level": "raid0", 00:08:58.642 "superblock": false, 00:08:58.642 "num_base_bdevs": 3, 00:08:58.642 "num_base_bdevs_discovered": 1, 00:08:58.642 "num_base_bdevs_operational": 3, 00:08:58.642 "base_bdevs_list": [ 00:08:58.642 { 00:08:58.642 "name": "BaseBdev1", 00:08:58.642 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:08:58.642 
"is_configured": true, 00:08:58.642 "data_offset": 0, 00:08:58.642 "data_size": 65536 00:08:58.642 }, 00:08:58.642 { 00:08:58.642 "name": null, 00:08:58.642 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:08:58.642 "is_configured": false, 00:08:58.642 "data_offset": 0, 00:08:58.642 "data_size": 65536 00:08:58.642 }, 00:08:58.642 { 00:08:58.642 "name": null, 00:08:58.642 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:08:58.642 "is_configured": false, 00:08:58.642 "data_offset": 0, 00:08:58.642 "data_size": 65536 00:08:58.642 } 00:08:58.642 ] 00:08:58.642 }' 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.642 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.902 [2024-11-18 13:25:28.931438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.902 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.162 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.162 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.162 "name": "Existed_Raid", 00:08:59.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.162 
"strip_size_kb": 64, 00:08:59.162 "state": "configuring", 00:08:59.162 "raid_level": "raid0", 00:08:59.162 "superblock": false, 00:08:59.162 "num_base_bdevs": 3, 00:08:59.162 "num_base_bdevs_discovered": 2, 00:08:59.162 "num_base_bdevs_operational": 3, 00:08:59.162 "base_bdevs_list": [ 00:08:59.162 { 00:08:59.162 "name": "BaseBdev1", 00:08:59.162 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:08:59.162 "is_configured": true, 00:08:59.162 "data_offset": 0, 00:08:59.162 "data_size": 65536 00:08:59.162 }, 00:08:59.162 { 00:08:59.162 "name": null, 00:08:59.162 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:08:59.162 "is_configured": false, 00:08:59.162 "data_offset": 0, 00:08:59.162 "data_size": 65536 00:08:59.162 }, 00:08:59.162 { 00:08:59.162 "name": "BaseBdev3", 00:08:59.162 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:08:59.162 "is_configured": true, 00:08:59.162 "data_offset": 0, 00:08:59.162 "data_size": 65536 00:08:59.162 } 00:08:59.162 ] 00:08:59.162 }' 00:08:59.162 13:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.162 13:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.423 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.423 [2024-11-18 13:25:29.450592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.683 13:25:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.683 "name": "Existed_Raid", 00:08:59.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.683 "strip_size_kb": 64, 00:08:59.683 "state": "configuring", 00:08:59.683 "raid_level": "raid0", 00:08:59.683 "superblock": false, 00:08:59.683 "num_base_bdevs": 3, 00:08:59.683 "num_base_bdevs_discovered": 1, 00:08:59.683 "num_base_bdevs_operational": 3, 00:08:59.683 "base_bdevs_list": [ 00:08:59.683 { 00:08:59.683 "name": null, 00:08:59.683 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:08:59.683 "is_configured": false, 00:08:59.683 "data_offset": 0, 00:08:59.683 "data_size": 65536 00:08:59.683 }, 00:08:59.683 { 00:08:59.683 "name": null, 00:08:59.683 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:08:59.683 "is_configured": false, 00:08:59.683 "data_offset": 0, 00:08:59.683 "data_size": 65536 00:08:59.683 }, 00:08:59.683 { 00:08:59.683 "name": "BaseBdev3", 00:08:59.683 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:08:59.683 "is_configured": true, 00:08:59.683 "data_offset": 0, 00:08:59.683 "data_size": 65536 00:08:59.683 } 00:08:59.683 ] 00:08:59.683 }' 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.683 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.944 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.944 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.944 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.944 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 
-- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.944 13:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.204 13:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.204 [2024-11-18 13:25:30.007808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.204 "name": "Existed_Raid", 00:09:00.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.204 "strip_size_kb": 64, 00:09:00.204 "state": "configuring", 00:09:00.204 "raid_level": "raid0", 00:09:00.204 "superblock": false, 00:09:00.204 "num_base_bdevs": 3, 00:09:00.204 "num_base_bdevs_discovered": 2, 00:09:00.204 "num_base_bdevs_operational": 3, 00:09:00.204 "base_bdevs_list": [ 00:09:00.204 { 00:09:00.204 "name": null, 00:09:00.204 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:09:00.204 "is_configured": false, 00:09:00.204 "data_offset": 0, 00:09:00.204 "data_size": 65536 00:09:00.204 }, 00:09:00.204 { 00:09:00.204 "name": "BaseBdev2", 00:09:00.204 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:09:00.204 "is_configured": true, 00:09:00.204 "data_offset": 0, 00:09:00.204 "data_size": 65536 00:09:00.204 }, 00:09:00.204 { 00:09:00.204 "name": "BaseBdev3", 00:09:00.204 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:09:00.204 "is_configured": true, 00:09:00.204 "data_offset": 0, 00:09:00.204 "data_size": 65536 00:09:00.204 } 00:09:00.204 ] 00:09:00.204 }' 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.204 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.465 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.724 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.724 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 885e611e-6a59-422e-84a2-ab26cc41a749 00:09:00.724 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.724 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.724 [2024-11-18 13:25:30.574157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:00.724 [2024-11-18 13:25:30.574329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:00.724 [2024-11-18 13:25:30.574346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, 
blocklen 512 00:09:00.724 [2024-11-18 13:25:30.574616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:00.724 [2024-11-18 13:25:30.574784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:00.724 [2024-11-18 13:25:30.574794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:00.724 [2024-11-18 13:25:30.575083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.724 NewBaseBdev 00:09:00.724 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.724 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.725 [ 00:09:00.725 { 00:09:00.725 "name": "NewBaseBdev", 00:09:00.725 "aliases": [ 00:09:00.725 "885e611e-6a59-422e-84a2-ab26cc41a749" 00:09:00.725 ], 00:09:00.725 "product_name": "Malloc disk", 00:09:00.725 "block_size": 512, 00:09:00.725 "num_blocks": 65536, 00:09:00.725 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:09:00.725 "assigned_rate_limits": { 00:09:00.725 "rw_ios_per_sec": 0, 00:09:00.725 "rw_mbytes_per_sec": 0, 00:09:00.725 "r_mbytes_per_sec": 0, 00:09:00.725 "w_mbytes_per_sec": 0 00:09:00.725 }, 00:09:00.725 "claimed": true, 00:09:00.725 "claim_type": "exclusive_write", 00:09:00.725 "zoned": false, 00:09:00.725 "supported_io_types": { 00:09:00.725 "read": true, 00:09:00.725 "write": true, 00:09:00.725 "unmap": true, 00:09:00.725 "flush": true, 00:09:00.725 "reset": true, 00:09:00.725 "nvme_admin": false, 00:09:00.725 "nvme_io": false, 00:09:00.725 "nvme_io_md": false, 00:09:00.725 "write_zeroes": true, 00:09:00.725 "zcopy": true, 00:09:00.725 "get_zone_info": false, 00:09:00.725 "zone_management": false, 00:09:00.725 "zone_append": false, 00:09:00.725 "compare": false, 00:09:00.725 "compare_and_write": false, 00:09:00.725 "abort": true, 00:09:00.725 "seek_hole": false, 00:09:00.725 "seek_data": false, 00:09:00.725 "copy": true, 00:09:00.725 "nvme_iov_md": false 00:09:00.725 }, 00:09:00.725 "memory_domains": [ 00:09:00.725 { 00:09:00.725 "dma_device_id": "system", 00:09:00.725 "dma_device_type": 1 00:09:00.725 }, 00:09:00.725 { 00:09:00.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.725 "dma_device_type": 2 00:09:00.725 } 00:09:00.725 ], 00:09:00.725 "driver_specific": {} 00:09:00.725 } 00:09:00.725 ] 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.725 "name": "Existed_Raid", 00:09:00.725 "uuid": "eca3f384-3456-4b25-ab13-2bdc7af7b063", 00:09:00.725 "strip_size_kb": 64, 
00:09:00.725 "state": "online", 00:09:00.725 "raid_level": "raid0", 00:09:00.725 "superblock": false, 00:09:00.725 "num_base_bdevs": 3, 00:09:00.725 "num_base_bdevs_discovered": 3, 00:09:00.725 "num_base_bdevs_operational": 3, 00:09:00.725 "base_bdevs_list": [ 00:09:00.725 { 00:09:00.725 "name": "NewBaseBdev", 00:09:00.725 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:09:00.725 "is_configured": true, 00:09:00.725 "data_offset": 0, 00:09:00.725 "data_size": 65536 00:09:00.725 }, 00:09:00.725 { 00:09:00.725 "name": "BaseBdev2", 00:09:00.725 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:09:00.725 "is_configured": true, 00:09:00.725 "data_offset": 0, 00:09:00.725 "data_size": 65536 00:09:00.725 }, 00:09:00.725 { 00:09:00.725 "name": "BaseBdev3", 00:09:00.725 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:09:00.725 "is_configured": true, 00:09:00.725 "data_offset": 0, 00:09:00.725 "data_size": 65536 00:09:00.725 } 00:09:00.725 ] 00:09:00.725 }' 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.725 13:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.295 
13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.295 [2024-11-18 13:25:31.069621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.295 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.295 "name": "Existed_Raid", 00:09:01.295 "aliases": [ 00:09:01.295 "eca3f384-3456-4b25-ab13-2bdc7af7b063" 00:09:01.295 ], 00:09:01.295 "product_name": "Raid Volume", 00:09:01.295 "block_size": 512, 00:09:01.295 "num_blocks": 196608, 00:09:01.295 "uuid": "eca3f384-3456-4b25-ab13-2bdc7af7b063", 00:09:01.295 "assigned_rate_limits": { 00:09:01.295 "rw_ios_per_sec": 0, 00:09:01.295 "rw_mbytes_per_sec": 0, 00:09:01.295 "r_mbytes_per_sec": 0, 00:09:01.295 "w_mbytes_per_sec": 0 00:09:01.295 }, 00:09:01.295 "claimed": false, 00:09:01.295 "zoned": false, 00:09:01.295 "supported_io_types": { 00:09:01.295 "read": true, 00:09:01.295 "write": true, 00:09:01.295 "unmap": true, 00:09:01.295 "flush": true, 00:09:01.295 "reset": true, 00:09:01.295 "nvme_admin": false, 00:09:01.295 "nvme_io": false, 00:09:01.295 "nvme_io_md": false, 00:09:01.295 "write_zeroes": true, 00:09:01.295 "zcopy": false, 00:09:01.295 "get_zone_info": false, 00:09:01.295 "zone_management": false, 00:09:01.295 "zone_append": false, 00:09:01.295 "compare": false, 00:09:01.295 "compare_and_write": false, 00:09:01.295 "abort": false, 00:09:01.295 "seek_hole": false, 00:09:01.295 "seek_data": false, 00:09:01.295 "copy": false, 00:09:01.295 "nvme_iov_md": false 00:09:01.295 }, 00:09:01.295 "memory_domains": [ 00:09:01.295 { 00:09:01.295 "dma_device_id": "system", 00:09:01.295 
"dma_device_type": 1 00:09:01.295 }, 00:09:01.295 { 00:09:01.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.295 "dma_device_type": 2 00:09:01.295 }, 00:09:01.295 { 00:09:01.295 "dma_device_id": "system", 00:09:01.295 "dma_device_type": 1 00:09:01.295 }, 00:09:01.295 { 00:09:01.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.295 "dma_device_type": 2 00:09:01.295 }, 00:09:01.295 { 00:09:01.295 "dma_device_id": "system", 00:09:01.295 "dma_device_type": 1 00:09:01.295 }, 00:09:01.295 { 00:09:01.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.295 "dma_device_type": 2 00:09:01.295 } 00:09:01.295 ], 00:09:01.295 "driver_specific": { 00:09:01.295 "raid": { 00:09:01.295 "uuid": "eca3f384-3456-4b25-ab13-2bdc7af7b063", 00:09:01.295 "strip_size_kb": 64, 00:09:01.295 "state": "online", 00:09:01.295 "raid_level": "raid0", 00:09:01.296 "superblock": false, 00:09:01.296 "num_base_bdevs": 3, 00:09:01.296 "num_base_bdevs_discovered": 3, 00:09:01.296 "num_base_bdevs_operational": 3, 00:09:01.296 "base_bdevs_list": [ 00:09:01.296 { 00:09:01.296 "name": "NewBaseBdev", 00:09:01.296 "uuid": "885e611e-6a59-422e-84a2-ab26cc41a749", 00:09:01.296 "is_configured": true, 00:09:01.296 "data_offset": 0, 00:09:01.296 "data_size": 65536 00:09:01.296 }, 00:09:01.296 { 00:09:01.296 "name": "BaseBdev2", 00:09:01.296 "uuid": "469d9ad9-c180-47eb-afa4-ed5114fb43aa", 00:09:01.296 "is_configured": true, 00:09:01.296 "data_offset": 0, 00:09:01.296 "data_size": 65536 00:09:01.296 }, 00:09:01.296 { 00:09:01.296 "name": "BaseBdev3", 00:09:01.296 "uuid": "924465ec-22d1-449f-b1c4-532d0ed1be8a", 00:09:01.296 "is_configured": true, 00:09:01.296 "data_offset": 0, 00:09:01.296 "data_size": 65536 00:09:01.296 } 00:09:01.296 ] 00:09:01.296 } 00:09:01.296 } 00:09:01.296 }' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.296 13:25:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:01.296 BaseBdev2 00:09:01.296 BaseBdev3' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.296 13:25:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.296 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.296 [2024-11-18 13:25:31.344869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.296 [2024-11-18 13:25:31.344922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.296 [2024-11-18 13:25:31.345017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.296 [2024-11-18 13:25:31.345073] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.296 [2024-11-18 13:25:31.345086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63844 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63844 ']' 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63844 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63844 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63844' 00:09:01.556 killing process with pid 63844 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63844 00:09:01.556 [2024-11-18 13:25:31.397402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.556 13:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63844 00:09:01.816 [2024-11-18 13:25:31.702573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:03.221 
00:09:03.221 real 0m10.472s 00:09:03.221 user 0m16.622s 00:09:03.221 sys 0m1.842s 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.221 ************************************ 00:09:03.221 END TEST raid_state_function_test 00:09:03.221 ************************************ 00:09:03.221 13:25:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:03.221 13:25:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:03.221 13:25:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.221 13:25:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.221 ************************************ 00:09:03.221 START TEST raid_state_function_test_sb 00:09:03.221 ************************************ 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.221 
13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64471 00:09:03.221 Process raid pid: 64471 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64471' 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64471 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64471 ']' 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.221 13:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.221 [2024-11-18 13:25:32.988415] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:03.221 [2024-11-18 13:25:32.988619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.221 [2024-11-18 13:25:33.163958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.481 [2024-11-18 13:25:33.274601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.481 [2024-11-18 13:25:33.478334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.481 [2024-11-18 13:25:33.478467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.051 [2024-11-18 13:25:33.888324] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.051 [2024-11-18 13:25:33.888379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.051 [2024-11-18 13:25:33.888390] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.051 [2024-11-18 13:25:33.888399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.051 [2024-11-18 13:25:33.888406] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:04.051 [2024-11-18 13:25:33.888415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.051 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.052 "name": "Existed_Raid", 00:09:04.052 "uuid": "07ba6d4b-a931-4396-a9d5-1f3c340a7305", 00:09:04.052 "strip_size_kb": 64, 00:09:04.052 "state": "configuring", 00:09:04.052 "raid_level": "raid0", 00:09:04.052 "superblock": true, 00:09:04.052 "num_base_bdevs": 3, 00:09:04.052 "num_base_bdevs_discovered": 0, 00:09:04.052 "num_base_bdevs_operational": 3, 00:09:04.052 "base_bdevs_list": [ 00:09:04.052 { 00:09:04.052 "name": "BaseBdev1", 00:09:04.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.052 "is_configured": false, 00:09:04.052 "data_offset": 0, 00:09:04.052 "data_size": 0 00:09:04.052 }, 00:09:04.052 { 00:09:04.052 "name": "BaseBdev2", 00:09:04.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.052 "is_configured": false, 00:09:04.052 "data_offset": 0, 00:09:04.052 "data_size": 0 00:09:04.052 }, 00:09:04.052 { 00:09:04.052 "name": "BaseBdev3", 00:09:04.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.052 "is_configured": false, 00:09:04.052 "data_offset": 0, 00:09:04.052 "data_size": 0 00:09:04.052 } 00:09:04.052 ] 00:09:04.052 }' 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.052 13:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.623 [2024-11-18 13:25:34.387460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.623 [2024-11-18 13:25:34.387597] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.623 [2024-11-18 13:25:34.399476] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:04.623 [2024-11-18 13:25:34.399622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:04.623 [2024-11-18 13:25:34.399650] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:04.623 [2024-11-18 13:25:34.399673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:04.623 [2024-11-18 13:25:34.399691] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:04.623 [2024-11-18 13:25:34.399712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.623 [2024-11-18 13:25:34.447578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.623 BaseBdev1 
00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.623 [ 00:09:04.623 { 00:09:04.623 "name": "BaseBdev1", 00:09:04.623 "aliases": [ 00:09:04.623 "d85b3b05-6afd-4b91-85f8-0f7d58eae169" 00:09:04.623 ], 00:09:04.623 "product_name": "Malloc disk", 00:09:04.623 "block_size": 512, 00:09:04.623 "num_blocks": 65536, 00:09:04.623 "uuid": "d85b3b05-6afd-4b91-85f8-0f7d58eae169", 00:09:04.623 "assigned_rate_limits": { 00:09:04.623 
"rw_ios_per_sec": 0, 00:09:04.623 "rw_mbytes_per_sec": 0, 00:09:04.623 "r_mbytes_per_sec": 0, 00:09:04.623 "w_mbytes_per_sec": 0 00:09:04.623 }, 00:09:04.623 "claimed": true, 00:09:04.623 "claim_type": "exclusive_write", 00:09:04.623 "zoned": false, 00:09:04.623 "supported_io_types": { 00:09:04.623 "read": true, 00:09:04.623 "write": true, 00:09:04.623 "unmap": true, 00:09:04.623 "flush": true, 00:09:04.623 "reset": true, 00:09:04.623 "nvme_admin": false, 00:09:04.623 "nvme_io": false, 00:09:04.623 "nvme_io_md": false, 00:09:04.623 "write_zeroes": true, 00:09:04.623 "zcopy": true, 00:09:04.623 "get_zone_info": false, 00:09:04.623 "zone_management": false, 00:09:04.623 "zone_append": false, 00:09:04.623 "compare": false, 00:09:04.623 "compare_and_write": false, 00:09:04.623 "abort": true, 00:09:04.623 "seek_hole": false, 00:09:04.623 "seek_data": false, 00:09:04.623 "copy": true, 00:09:04.623 "nvme_iov_md": false 00:09:04.623 }, 00:09:04.623 "memory_domains": [ 00:09:04.623 { 00:09:04.623 "dma_device_id": "system", 00:09:04.623 "dma_device_type": 1 00:09:04.623 }, 00:09:04.623 { 00:09:04.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.623 "dma_device_type": 2 00:09:04.623 } 00:09:04.623 ], 00:09:04.623 "driver_specific": {} 00:09:04.623 } 00:09:04.623 ] 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.623 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.624 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.624 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.624 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.624 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.624 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.624 "name": "Existed_Raid", 00:09:04.624 "uuid": "7fe40e9e-2431-4e5d-94c8-97d5d12ba724", 00:09:04.624 "strip_size_kb": 64, 00:09:04.624 "state": "configuring", 00:09:04.624 "raid_level": "raid0", 00:09:04.624 "superblock": true, 00:09:04.624 "num_base_bdevs": 3, 00:09:04.624 "num_base_bdevs_discovered": 1, 00:09:04.624 "num_base_bdevs_operational": 3, 00:09:04.624 "base_bdevs_list": [ 00:09:04.624 { 00:09:04.624 "name": "BaseBdev1", 00:09:04.624 "uuid": "d85b3b05-6afd-4b91-85f8-0f7d58eae169", 00:09:04.624 "is_configured": true, 00:09:04.624 "data_offset": 2048, 00:09:04.624 "data_size": 63488 
00:09:04.624 }, 00:09:04.624 { 00:09:04.624 "name": "BaseBdev2", 00:09:04.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.624 "is_configured": false, 00:09:04.624 "data_offset": 0, 00:09:04.624 "data_size": 0 00:09:04.624 }, 00:09:04.624 { 00:09:04.624 "name": "BaseBdev3", 00:09:04.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.624 "is_configured": false, 00:09:04.624 "data_offset": 0, 00:09:04.624 "data_size": 0 00:09:04.624 } 00:09:04.624 ] 00:09:04.624 }' 00:09:04.624 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.624 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.883 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.883 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.883 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.883 [2024-11-18 13:25:34.934781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.143 [2024-11-18 13:25:34.934908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.143 [2024-11-18 13:25:34.946802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.143 [2024-11-18 
13:25:34.948557] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.143 [2024-11-18 13:25:34.948599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.143 [2024-11-18 13:25:34.948608] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.143 [2024-11-18 13:25:34.948617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.143 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.143 "name": "Existed_Raid", 00:09:05.143 "uuid": "00a57159-7668-4d4b-8bcd-0a688501d2fb", 00:09:05.143 "strip_size_kb": 64, 00:09:05.143 "state": "configuring", 00:09:05.143 "raid_level": "raid0", 00:09:05.143 "superblock": true, 00:09:05.143 "num_base_bdevs": 3, 00:09:05.143 "num_base_bdevs_discovered": 1, 00:09:05.143 "num_base_bdevs_operational": 3, 00:09:05.143 "base_bdevs_list": [ 00:09:05.143 { 00:09:05.143 "name": "BaseBdev1", 00:09:05.144 "uuid": "d85b3b05-6afd-4b91-85f8-0f7d58eae169", 00:09:05.144 "is_configured": true, 00:09:05.144 "data_offset": 2048, 00:09:05.144 "data_size": 63488 00:09:05.144 }, 00:09:05.144 { 00:09:05.144 "name": "BaseBdev2", 00:09:05.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.144 "is_configured": false, 00:09:05.144 "data_offset": 0, 00:09:05.144 "data_size": 0 00:09:05.144 }, 00:09:05.144 { 00:09:05.144 "name": "BaseBdev3", 00:09:05.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.144 "is_configured": false, 00:09:05.144 "data_offset": 0, 00:09:05.144 "data_size": 0 00:09:05.144 } 00:09:05.144 ] 00:09:05.144 }' 00:09:05.144 13:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.144 13:25:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.413 [2024-11-18 13:25:35.381847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.413 BaseBdev2 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.413 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.413 [ 00:09:05.413 { 00:09:05.413 "name": "BaseBdev2", 00:09:05.413 "aliases": [ 00:09:05.413 "218bb553-732a-467f-8a59-9e2c766e8c37" 00:09:05.413 ], 00:09:05.413 "product_name": "Malloc disk", 00:09:05.413 "block_size": 512, 00:09:05.413 "num_blocks": 65536, 00:09:05.413 "uuid": "218bb553-732a-467f-8a59-9e2c766e8c37", 00:09:05.413 "assigned_rate_limits": { 00:09:05.413 "rw_ios_per_sec": 0, 00:09:05.413 "rw_mbytes_per_sec": 0, 00:09:05.413 "r_mbytes_per_sec": 0, 00:09:05.413 "w_mbytes_per_sec": 0 00:09:05.413 }, 00:09:05.413 "claimed": true, 00:09:05.413 "claim_type": "exclusive_write", 00:09:05.413 "zoned": false, 00:09:05.413 "supported_io_types": { 00:09:05.413 "read": true, 00:09:05.413 "write": true, 00:09:05.413 "unmap": true, 00:09:05.413 "flush": true, 00:09:05.413 "reset": true, 00:09:05.413 "nvme_admin": false, 00:09:05.413 "nvme_io": false, 00:09:05.413 "nvme_io_md": false, 00:09:05.413 "write_zeroes": true, 00:09:05.413 "zcopy": true, 00:09:05.413 "get_zone_info": false, 00:09:05.413 "zone_management": false, 00:09:05.413 "zone_append": false, 00:09:05.413 "compare": false, 00:09:05.413 "compare_and_write": false, 00:09:05.413 "abort": true, 00:09:05.413 "seek_hole": false, 00:09:05.414 "seek_data": false, 00:09:05.414 "copy": true, 00:09:05.414 "nvme_iov_md": false 00:09:05.414 }, 00:09:05.414 "memory_domains": [ 00:09:05.414 { 00:09:05.414 "dma_device_id": "system", 00:09:05.414 "dma_device_type": 1 00:09:05.414 }, 00:09:05.414 { 00:09:05.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.414 "dma_device_type": 2 00:09:05.414 } 00:09:05.414 ], 00:09:05.414 "driver_specific": {} 00:09:05.414 } 00:09:05.414 ] 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.414 13:25:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.674 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.674 "name": "Existed_Raid", 00:09:05.674 "uuid": "00a57159-7668-4d4b-8bcd-0a688501d2fb", 00:09:05.674 "strip_size_kb": 64, 00:09:05.674 "state": "configuring", 00:09:05.674 "raid_level": "raid0", 00:09:05.674 "superblock": true, 00:09:05.674 "num_base_bdevs": 3, 00:09:05.674 "num_base_bdevs_discovered": 2, 00:09:05.674 "num_base_bdevs_operational": 3, 00:09:05.674 "base_bdevs_list": [ 00:09:05.674 { 00:09:05.674 "name": "BaseBdev1", 00:09:05.674 "uuid": "d85b3b05-6afd-4b91-85f8-0f7d58eae169", 00:09:05.674 "is_configured": true, 00:09:05.674 "data_offset": 2048, 00:09:05.674 "data_size": 63488 00:09:05.674 }, 00:09:05.674 { 00:09:05.674 "name": "BaseBdev2", 00:09:05.674 "uuid": "218bb553-732a-467f-8a59-9e2c766e8c37", 00:09:05.674 "is_configured": true, 00:09:05.674 "data_offset": 2048, 00:09:05.674 "data_size": 63488 00:09:05.674 }, 00:09:05.674 { 00:09:05.674 "name": "BaseBdev3", 00:09:05.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.674 "is_configured": false, 00:09:05.674 "data_offset": 0, 00:09:05.674 "data_size": 0 00:09:05.674 } 00:09:05.674 ] 00:09:05.675 }' 00:09:05.675 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.675 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 [2024-11-18 13:25:35.881013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.949 [2024-11-18 13:25:35.881366] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.949 [2024-11-18 13:25:35.881427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.949 [2024-11-18 13:25:35.881710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:05.949 [2024-11-18 13:25:35.881887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.949 [2024-11-18 13:25:35.881925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:05.949 BaseBdev3 00:09:05.949 [2024-11-18 13:25:35.882102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 [ 00:09:05.949 { 00:09:05.949 "name": "BaseBdev3", 00:09:05.949 "aliases": [ 00:09:05.949 "1d4c83f9-c43c-4bf4-b759-cc17005eb0be" 00:09:05.949 ], 00:09:05.949 "product_name": "Malloc disk", 00:09:05.949 "block_size": 512, 00:09:05.949 "num_blocks": 65536, 00:09:05.949 "uuid": "1d4c83f9-c43c-4bf4-b759-cc17005eb0be", 00:09:05.949 "assigned_rate_limits": { 00:09:05.949 "rw_ios_per_sec": 0, 00:09:05.949 "rw_mbytes_per_sec": 0, 00:09:05.949 "r_mbytes_per_sec": 0, 00:09:05.949 "w_mbytes_per_sec": 0 00:09:05.949 }, 00:09:05.949 "claimed": true, 00:09:05.949 "claim_type": "exclusive_write", 00:09:05.949 "zoned": false, 00:09:05.949 "supported_io_types": { 00:09:05.949 "read": true, 00:09:05.949 "write": true, 00:09:05.949 "unmap": true, 00:09:05.949 "flush": true, 00:09:05.949 "reset": true, 00:09:05.949 "nvme_admin": false, 00:09:05.949 "nvme_io": false, 00:09:05.949 "nvme_io_md": false, 00:09:05.949 "write_zeroes": true, 00:09:05.949 "zcopy": true, 00:09:05.949 "get_zone_info": false, 00:09:05.949 "zone_management": false, 00:09:05.949 "zone_append": false, 00:09:05.949 "compare": false, 00:09:05.949 "compare_and_write": false, 00:09:05.949 "abort": true, 00:09:05.949 "seek_hole": false, 00:09:05.949 "seek_data": false, 00:09:05.949 "copy": true, 00:09:05.949 "nvme_iov_md": false 00:09:05.949 }, 00:09:05.949 "memory_domains": [ 00:09:05.949 { 00:09:05.949 "dma_device_id": "system", 00:09:05.949 "dma_device_type": 1 00:09:05.949 }, 00:09:05.949 { 00:09:05.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.949 "dma_device_type": 2 00:09:05.949 } 00:09:05.949 ], 00:09:05.949 "driver_specific": 
{} 00:09:05.949 } 00:09:05.949 ] 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.950 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.950 "name": "Existed_Raid", 00:09:05.950 "uuid": "00a57159-7668-4d4b-8bcd-0a688501d2fb", 00:09:05.950 "strip_size_kb": 64, 00:09:05.950 "state": "online", 00:09:05.950 "raid_level": "raid0", 00:09:05.950 "superblock": true, 00:09:05.950 "num_base_bdevs": 3, 00:09:05.950 "num_base_bdevs_discovered": 3, 00:09:05.950 "num_base_bdevs_operational": 3, 00:09:05.950 "base_bdevs_list": [ 00:09:05.950 { 00:09:05.950 "name": "BaseBdev1", 00:09:05.950 "uuid": "d85b3b05-6afd-4b91-85f8-0f7d58eae169", 00:09:05.950 "is_configured": true, 00:09:05.950 "data_offset": 2048, 00:09:05.950 "data_size": 63488 00:09:05.950 }, 00:09:05.950 { 00:09:05.950 "name": "BaseBdev2", 00:09:05.950 "uuid": "218bb553-732a-467f-8a59-9e2c766e8c37", 00:09:05.950 "is_configured": true, 00:09:05.950 "data_offset": 2048, 00:09:05.950 "data_size": 63488 00:09:05.950 }, 00:09:05.950 { 00:09:05.950 "name": "BaseBdev3", 00:09:05.950 "uuid": "1d4c83f9-c43c-4bf4-b759-cc17005eb0be", 00:09:05.950 "is_configured": true, 00:09:05.950 "data_offset": 2048, 00:09:05.950 "data_size": 63488 00:09:05.950 } 00:09:05.950 ] 00:09:05.950 }' 00:09:05.950 13:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.950 13:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.522 [2024-11-18 13:25:36.336589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.522 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.522 "name": "Existed_Raid", 00:09:06.522 "aliases": [ 00:09:06.522 "00a57159-7668-4d4b-8bcd-0a688501d2fb" 00:09:06.522 ], 00:09:06.522 "product_name": "Raid Volume", 00:09:06.522 "block_size": 512, 00:09:06.522 "num_blocks": 190464, 00:09:06.522 "uuid": "00a57159-7668-4d4b-8bcd-0a688501d2fb", 00:09:06.522 "assigned_rate_limits": { 00:09:06.522 "rw_ios_per_sec": 0, 00:09:06.522 "rw_mbytes_per_sec": 0, 00:09:06.522 "r_mbytes_per_sec": 0, 00:09:06.522 "w_mbytes_per_sec": 0 00:09:06.522 }, 00:09:06.522 "claimed": false, 00:09:06.522 "zoned": false, 00:09:06.522 "supported_io_types": { 00:09:06.522 "read": true, 00:09:06.522 "write": true, 00:09:06.522 "unmap": true, 00:09:06.522 "flush": true, 00:09:06.522 "reset": true, 00:09:06.522 "nvme_admin": false, 00:09:06.523 "nvme_io": false, 00:09:06.523 "nvme_io_md": false, 00:09:06.523 
"write_zeroes": true, 00:09:06.523 "zcopy": false, 00:09:06.523 "get_zone_info": false, 00:09:06.523 "zone_management": false, 00:09:06.523 "zone_append": false, 00:09:06.523 "compare": false, 00:09:06.523 "compare_and_write": false, 00:09:06.523 "abort": false, 00:09:06.523 "seek_hole": false, 00:09:06.523 "seek_data": false, 00:09:06.523 "copy": false, 00:09:06.523 "nvme_iov_md": false 00:09:06.523 }, 00:09:06.523 "memory_domains": [ 00:09:06.523 { 00:09:06.523 "dma_device_id": "system", 00:09:06.523 "dma_device_type": 1 00:09:06.523 }, 00:09:06.523 { 00:09:06.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.523 "dma_device_type": 2 00:09:06.523 }, 00:09:06.523 { 00:09:06.523 "dma_device_id": "system", 00:09:06.523 "dma_device_type": 1 00:09:06.523 }, 00:09:06.523 { 00:09:06.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.523 "dma_device_type": 2 00:09:06.523 }, 00:09:06.523 { 00:09:06.523 "dma_device_id": "system", 00:09:06.523 "dma_device_type": 1 00:09:06.523 }, 00:09:06.523 { 00:09:06.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.523 "dma_device_type": 2 00:09:06.523 } 00:09:06.523 ], 00:09:06.523 "driver_specific": { 00:09:06.523 "raid": { 00:09:06.523 "uuid": "00a57159-7668-4d4b-8bcd-0a688501d2fb", 00:09:06.523 "strip_size_kb": 64, 00:09:06.523 "state": "online", 00:09:06.523 "raid_level": "raid0", 00:09:06.523 "superblock": true, 00:09:06.523 "num_base_bdevs": 3, 00:09:06.523 "num_base_bdevs_discovered": 3, 00:09:06.523 "num_base_bdevs_operational": 3, 00:09:06.523 "base_bdevs_list": [ 00:09:06.523 { 00:09:06.523 "name": "BaseBdev1", 00:09:06.523 "uuid": "d85b3b05-6afd-4b91-85f8-0f7d58eae169", 00:09:06.523 "is_configured": true, 00:09:06.523 "data_offset": 2048, 00:09:06.523 "data_size": 63488 00:09:06.523 }, 00:09:06.523 { 00:09:06.523 "name": "BaseBdev2", 00:09:06.523 "uuid": "218bb553-732a-467f-8a59-9e2c766e8c37", 00:09:06.523 "is_configured": true, 00:09:06.523 "data_offset": 2048, 00:09:06.523 "data_size": 63488 00:09:06.523 }, 
00:09:06.523 { 00:09:06.523 "name": "BaseBdev3", 00:09:06.523 "uuid": "1d4c83f9-c43c-4bf4-b759-cc17005eb0be", 00:09:06.523 "is_configured": true, 00:09:06.523 "data_offset": 2048, 00:09:06.523 "data_size": 63488 00:09:06.523 } 00:09:06.523 ] 00:09:06.523 } 00:09:06.523 } 00:09:06.523 }' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:06.523 BaseBdev2 00:09:06.523 BaseBdev3' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.523 
13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.523 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.783 [2024-11-18 13:25:36.579942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.783 [2024-11-18 13:25:36.580042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.783 [2024-11-18 13:25:36.580122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.783 "name": "Existed_Raid", 00:09:06.783 "uuid": "00a57159-7668-4d4b-8bcd-0a688501d2fb", 00:09:06.783 "strip_size_kb": 64, 00:09:06.783 "state": "offline", 00:09:06.783 "raid_level": "raid0", 00:09:06.783 "superblock": true, 00:09:06.783 "num_base_bdevs": 3, 00:09:06.783 "num_base_bdevs_discovered": 2, 00:09:06.783 "num_base_bdevs_operational": 2, 00:09:06.783 "base_bdevs_list": [ 00:09:06.783 { 00:09:06.783 "name": null, 00:09:06.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.783 "is_configured": false, 00:09:06.783 "data_offset": 0, 00:09:06.783 "data_size": 63488 00:09:06.783 }, 00:09:06.783 { 00:09:06.783 "name": "BaseBdev2", 00:09:06.783 "uuid": "218bb553-732a-467f-8a59-9e2c766e8c37", 00:09:06.783 "is_configured": true, 00:09:06.783 "data_offset": 2048, 00:09:06.783 "data_size": 63488 00:09:06.783 }, 00:09:06.783 { 00:09:06.783 "name": "BaseBdev3", 00:09:06.783 "uuid": "1d4c83f9-c43c-4bf4-b759-cc17005eb0be", 
00:09:06.783 "is_configured": true, 00:09:06.783 "data_offset": 2048, 00:09:06.783 "data_size": 63488 00:09:06.783 } 00:09:06.783 ] 00:09:06.783 }' 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.783 13:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.353 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.354 [2024-11-18 13:25:37.217364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.354 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.354 [2024-11-18 13:25:37.369725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:07.354 [2024-11-18 13:25:37.369830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.616 BaseBdev2 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.616 13:25:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.616 [ 00:09:07.616 { 00:09:07.616 "name": "BaseBdev2", 00:09:07.616 "aliases": [ 00:09:07.616 "0653122a-13a7-47f1-a593-93896bcd5720" 00:09:07.616 ], 00:09:07.616 "product_name": "Malloc disk", 00:09:07.616 "block_size": 512, 00:09:07.616 "num_blocks": 65536, 00:09:07.616 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:07.616 "assigned_rate_limits": { 00:09:07.616 "rw_ios_per_sec": 0, 00:09:07.616 "rw_mbytes_per_sec": 0, 00:09:07.616 "r_mbytes_per_sec": 0, 00:09:07.616 "w_mbytes_per_sec": 0 00:09:07.616 }, 00:09:07.616 "claimed": false, 00:09:07.616 "zoned": false, 00:09:07.616 "supported_io_types": { 00:09:07.616 "read": true, 00:09:07.616 "write": true, 00:09:07.616 "unmap": true, 00:09:07.616 "flush": true, 00:09:07.616 "reset": true, 00:09:07.616 "nvme_admin": false, 00:09:07.616 "nvme_io": false, 00:09:07.616 "nvme_io_md": false, 00:09:07.616 "write_zeroes": true, 00:09:07.616 "zcopy": true, 00:09:07.616 "get_zone_info": false, 00:09:07.616 
"zone_management": false, 00:09:07.616 "zone_append": false, 00:09:07.616 "compare": false, 00:09:07.616 "compare_and_write": false, 00:09:07.616 "abort": true, 00:09:07.616 "seek_hole": false, 00:09:07.616 "seek_data": false, 00:09:07.616 "copy": true, 00:09:07.616 "nvme_iov_md": false 00:09:07.616 }, 00:09:07.616 "memory_domains": [ 00:09:07.616 { 00:09:07.616 "dma_device_id": "system", 00:09:07.616 "dma_device_type": 1 00:09:07.616 }, 00:09:07.616 { 00:09:07.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.616 "dma_device_type": 2 00:09:07.616 } 00:09:07.616 ], 00:09:07.616 "driver_specific": {} 00:09:07.616 } 00:09:07.616 ] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.616 BaseBdev3 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.616 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.878 [ 00:09:07.878 { 00:09:07.878 "name": "BaseBdev3", 00:09:07.878 "aliases": [ 00:09:07.878 "8291c0dd-2531-48d6-94c0-1eed8b958af3" 00:09:07.878 ], 00:09:07.878 "product_name": "Malloc disk", 00:09:07.878 "block_size": 512, 00:09:07.878 "num_blocks": 65536, 00:09:07.878 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:07.878 "assigned_rate_limits": { 00:09:07.878 "rw_ios_per_sec": 0, 00:09:07.878 "rw_mbytes_per_sec": 0, 00:09:07.878 "r_mbytes_per_sec": 0, 00:09:07.878 "w_mbytes_per_sec": 0 00:09:07.878 }, 00:09:07.878 "claimed": false, 00:09:07.878 "zoned": false, 00:09:07.878 "supported_io_types": { 00:09:07.878 "read": true, 00:09:07.878 "write": true, 00:09:07.878 "unmap": true, 00:09:07.878 "flush": true, 00:09:07.878 "reset": true, 00:09:07.878 "nvme_admin": false, 00:09:07.878 "nvme_io": false, 00:09:07.878 "nvme_io_md": false, 00:09:07.878 "write_zeroes": true, 00:09:07.878 
"zcopy": true, 00:09:07.878 "get_zone_info": false, 00:09:07.878 "zone_management": false, 00:09:07.878 "zone_append": false, 00:09:07.878 "compare": false, 00:09:07.878 "compare_and_write": false, 00:09:07.878 "abort": true, 00:09:07.878 "seek_hole": false, 00:09:07.878 "seek_data": false, 00:09:07.878 "copy": true, 00:09:07.878 "nvme_iov_md": false 00:09:07.878 }, 00:09:07.878 "memory_domains": [ 00:09:07.878 { 00:09:07.878 "dma_device_id": "system", 00:09:07.878 "dma_device_type": 1 00:09:07.878 }, 00:09:07.878 { 00:09:07.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.878 "dma_device_type": 2 00:09:07.878 } 00:09:07.878 ], 00:09:07.878 "driver_specific": {} 00:09:07.878 } 00:09:07.878 ] 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.878 [2024-11-18 13:25:37.691094] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.878 [2024-11-18 13:25:37.691276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.878 [2024-11-18 13:25:37.691327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.878 [2024-11-18 13:25:37.693223] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.878 13:25:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.878 "name": "Existed_Raid", 00:09:07.878 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:07.878 "strip_size_kb": 64, 00:09:07.878 "state": "configuring", 00:09:07.878 "raid_level": "raid0", 00:09:07.878 "superblock": true, 00:09:07.878 "num_base_bdevs": 3, 00:09:07.878 "num_base_bdevs_discovered": 2, 00:09:07.878 "num_base_bdevs_operational": 3, 00:09:07.878 "base_bdevs_list": [ 00:09:07.878 { 00:09:07.878 "name": "BaseBdev1", 00:09:07.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.878 "is_configured": false, 00:09:07.878 "data_offset": 0, 00:09:07.878 "data_size": 0 00:09:07.878 }, 00:09:07.878 { 00:09:07.878 "name": "BaseBdev2", 00:09:07.878 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:07.878 "is_configured": true, 00:09:07.878 "data_offset": 2048, 00:09:07.878 "data_size": 63488 00:09:07.878 }, 00:09:07.878 { 00:09:07.878 "name": "BaseBdev3", 00:09:07.878 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:07.878 "is_configured": true, 00:09:07.878 "data_offset": 2048, 00:09:07.878 "data_size": 63488 00:09:07.878 } 00:09:07.878 ] 00:09:07.878 }' 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.878 13:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.138 [2024-11-18 13:25:38.138439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.138 13:25:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.138 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.399 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.399 "name": "Existed_Raid", 00:09:08.399 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:08.399 "strip_size_kb": 64, 
00:09:08.399 "state": "configuring", 00:09:08.399 "raid_level": "raid0", 00:09:08.399 "superblock": true, 00:09:08.399 "num_base_bdevs": 3, 00:09:08.399 "num_base_bdevs_discovered": 1, 00:09:08.399 "num_base_bdevs_operational": 3, 00:09:08.399 "base_bdevs_list": [ 00:09:08.399 { 00:09:08.399 "name": "BaseBdev1", 00:09:08.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.399 "is_configured": false, 00:09:08.399 "data_offset": 0, 00:09:08.399 "data_size": 0 00:09:08.399 }, 00:09:08.399 { 00:09:08.399 "name": null, 00:09:08.399 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:08.399 "is_configured": false, 00:09:08.399 "data_offset": 0, 00:09:08.399 "data_size": 63488 00:09:08.399 }, 00:09:08.399 { 00:09:08.399 "name": "BaseBdev3", 00:09:08.399 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:08.399 "is_configured": true, 00:09:08.399 "data_offset": 2048, 00:09:08.399 "data_size": 63488 00:09:08.399 } 00:09:08.399 ] 00:09:08.399 }' 00:09:08.399 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.399 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.660 [2024-11-18 13:25:38.658793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.660 BaseBdev1 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.660 
[ 00:09:08.660 { 00:09:08.660 "name": "BaseBdev1", 00:09:08.660 "aliases": [ 00:09:08.660 "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b" 00:09:08.660 ], 00:09:08.660 "product_name": "Malloc disk", 00:09:08.660 "block_size": 512, 00:09:08.660 "num_blocks": 65536, 00:09:08.660 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:08.660 "assigned_rate_limits": { 00:09:08.660 "rw_ios_per_sec": 0, 00:09:08.660 "rw_mbytes_per_sec": 0, 00:09:08.660 "r_mbytes_per_sec": 0, 00:09:08.660 "w_mbytes_per_sec": 0 00:09:08.660 }, 00:09:08.660 "claimed": true, 00:09:08.660 "claim_type": "exclusive_write", 00:09:08.660 "zoned": false, 00:09:08.660 "supported_io_types": { 00:09:08.660 "read": true, 00:09:08.660 "write": true, 00:09:08.660 "unmap": true, 00:09:08.660 "flush": true, 00:09:08.660 "reset": true, 00:09:08.660 "nvme_admin": false, 00:09:08.660 "nvme_io": false, 00:09:08.660 "nvme_io_md": false, 00:09:08.660 "write_zeroes": true, 00:09:08.660 "zcopy": true, 00:09:08.660 "get_zone_info": false, 00:09:08.660 "zone_management": false, 00:09:08.660 "zone_append": false, 00:09:08.660 "compare": false, 00:09:08.660 "compare_and_write": false, 00:09:08.660 "abort": true, 00:09:08.660 "seek_hole": false, 00:09:08.660 "seek_data": false, 00:09:08.660 "copy": true, 00:09:08.660 "nvme_iov_md": false 00:09:08.660 }, 00:09:08.660 "memory_domains": [ 00:09:08.660 { 00:09:08.660 "dma_device_id": "system", 00:09:08.660 "dma_device_type": 1 00:09:08.660 }, 00:09:08.660 { 00:09:08.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.660 "dma_device_type": 2 00:09:08.660 } 00:09:08.660 ], 00:09:08.660 "driver_specific": {} 00:09:08.660 } 00:09:08.660 ] 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.660 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.661 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.661 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.661 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.921 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.921 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.921 "name": "Existed_Raid", 00:09:08.921 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:08.921 "strip_size_kb": 64, 00:09:08.921 "state": "configuring", 00:09:08.921 "raid_level": "raid0", 00:09:08.921 "superblock": true, 
00:09:08.921 "num_base_bdevs": 3, 00:09:08.921 "num_base_bdevs_discovered": 2, 00:09:08.921 "num_base_bdevs_operational": 3, 00:09:08.921 "base_bdevs_list": [ 00:09:08.921 { 00:09:08.921 "name": "BaseBdev1", 00:09:08.921 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:08.921 "is_configured": true, 00:09:08.921 "data_offset": 2048, 00:09:08.921 "data_size": 63488 00:09:08.921 }, 00:09:08.921 { 00:09:08.921 "name": null, 00:09:08.921 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:08.921 "is_configured": false, 00:09:08.921 "data_offset": 0, 00:09:08.921 "data_size": 63488 00:09:08.921 }, 00:09:08.921 { 00:09:08.921 "name": "BaseBdev3", 00:09:08.921 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:08.921 "is_configured": true, 00:09:08.921 "data_offset": 2048, 00:09:08.921 "data_size": 63488 00:09:08.921 } 00:09:08.921 ] 00:09:08.921 }' 00:09:08.921 13:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.921 13:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.182 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.182 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:09.182 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.182 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.182 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.182 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:09.182 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.183 [2024-11-18 13:25:39.142308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.183 "name": "Existed_Raid", 00:09:09.183 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:09.183 "strip_size_kb": 64, 00:09:09.183 "state": "configuring", 00:09:09.183 "raid_level": "raid0", 00:09:09.183 "superblock": true, 00:09:09.183 "num_base_bdevs": 3, 00:09:09.183 "num_base_bdevs_discovered": 1, 00:09:09.183 "num_base_bdevs_operational": 3, 00:09:09.183 "base_bdevs_list": [ 00:09:09.183 { 00:09:09.183 "name": "BaseBdev1", 00:09:09.183 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:09.183 "is_configured": true, 00:09:09.183 "data_offset": 2048, 00:09:09.183 "data_size": 63488 00:09:09.183 }, 00:09:09.183 { 00:09:09.183 "name": null, 00:09:09.183 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:09.183 "is_configured": false, 00:09:09.183 "data_offset": 0, 00:09:09.183 "data_size": 63488 00:09:09.183 }, 00:09:09.183 { 00:09:09.183 "name": null, 00:09:09.183 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:09.183 "is_configured": false, 00:09:09.183 "data_offset": 0, 00:09:09.183 "data_size": 63488 00:09:09.183 } 00:09:09.183 ] 00:09:09.183 }' 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.183 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.752 [2024-11-18 13:25:39.717334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.752 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.753 "name": "Existed_Raid", 00:09:09.753 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:09.753 "strip_size_kb": 64, 00:09:09.753 "state": "configuring", 00:09:09.753 "raid_level": "raid0", 00:09:09.753 "superblock": true, 00:09:09.753 "num_base_bdevs": 3, 00:09:09.753 "num_base_bdevs_discovered": 2, 00:09:09.753 "num_base_bdevs_operational": 3, 00:09:09.753 "base_bdevs_list": [ 00:09:09.753 { 00:09:09.753 "name": "BaseBdev1", 00:09:09.753 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:09.753 "is_configured": true, 00:09:09.753 "data_offset": 2048, 00:09:09.753 "data_size": 63488 00:09:09.753 }, 00:09:09.753 { 00:09:09.753 "name": null, 00:09:09.753 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:09.753 "is_configured": false, 00:09:09.753 "data_offset": 0, 00:09:09.753 "data_size": 63488 00:09:09.753 }, 00:09:09.753 { 00:09:09.753 "name": "BaseBdev3", 00:09:09.753 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:09.753 "is_configured": true, 00:09:09.753 "data_offset": 2048, 00:09:09.753 "data_size": 63488 00:09:09.753 } 00:09:09.753 ] 00:09:09.753 }' 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.753 13:25:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.323 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.323 [2024-11-18 13:25:40.288367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.583 "name": "Existed_Raid", 00:09:10.583 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:10.583 "strip_size_kb": 64, 00:09:10.583 "state": "configuring", 00:09:10.583 "raid_level": "raid0", 00:09:10.583 "superblock": true, 00:09:10.583 "num_base_bdevs": 3, 00:09:10.583 "num_base_bdevs_discovered": 1, 00:09:10.583 "num_base_bdevs_operational": 3, 00:09:10.583 "base_bdevs_list": [ 00:09:10.583 { 00:09:10.583 "name": null, 00:09:10.583 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:10.583 "is_configured": false, 00:09:10.583 "data_offset": 0, 00:09:10.583 "data_size": 63488 00:09:10.583 }, 00:09:10.583 { 00:09:10.583 "name": null, 00:09:10.583 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:10.583 "is_configured": false, 00:09:10.583 "data_offset": 0, 00:09:10.583 
"data_size": 63488 00:09:10.583 }, 00:09:10.583 { 00:09:10.583 "name": "BaseBdev3", 00:09:10.583 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:10.583 "is_configured": true, 00:09:10.583 "data_offset": 2048, 00:09:10.583 "data_size": 63488 00:09:10.583 } 00:09:10.583 ] 00:09:10.583 }' 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.583 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.857 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.857 [2024-11-18 13:25:40.898009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.165 13:25:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.165 "name": "Existed_Raid", 00:09:11.165 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:11.165 "strip_size_kb": 64, 00:09:11.165 "state": "configuring", 00:09:11.165 "raid_level": "raid0", 00:09:11.165 "superblock": true, 00:09:11.165 "num_base_bdevs": 3, 00:09:11.165 
"num_base_bdevs_discovered": 2, 00:09:11.165 "num_base_bdevs_operational": 3, 00:09:11.165 "base_bdevs_list": [ 00:09:11.165 { 00:09:11.165 "name": null, 00:09:11.165 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:11.165 "is_configured": false, 00:09:11.165 "data_offset": 0, 00:09:11.165 "data_size": 63488 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "name": "BaseBdev2", 00:09:11.165 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:11.165 "is_configured": true, 00:09:11.165 "data_offset": 2048, 00:09:11.165 "data_size": 63488 00:09:11.165 }, 00:09:11.165 { 00:09:11.165 "name": "BaseBdev3", 00:09:11.165 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:11.165 "is_configured": true, 00:09:11.165 "data_offset": 2048, 00:09:11.165 "data_size": 63488 00:09:11.165 } 00:09:11.165 ] 00:09:11.165 }' 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.165 13:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.425 13:25:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.425 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.426 [2024-11-18 13:25:41.459510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:11.426 [2024-11-18 13:25:41.459796] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:11.426 [2024-11-18 13:25:41.459848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:11.426 [2024-11-18 13:25:41.460097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:11.426 [2024-11-18 13:25:41.460296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:11.426 [2024-11-18 13:25:41.460339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:11.426 [2024-11-18 13:25:41.460518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.426 NewBaseBdev 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:11.426 
13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.426 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:11.685 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.685 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.685 [ 00:09:11.685 { 00:09:11.685 "name": "NewBaseBdev", 00:09:11.685 "aliases": [ 00:09:11.685 "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b" 00:09:11.685 ], 00:09:11.685 "product_name": "Malloc disk", 00:09:11.685 "block_size": 512, 00:09:11.685 "num_blocks": 65536, 00:09:11.685 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:11.685 "assigned_rate_limits": { 00:09:11.685 "rw_ios_per_sec": 0, 00:09:11.685 "rw_mbytes_per_sec": 0, 00:09:11.685 "r_mbytes_per_sec": 0, 00:09:11.685 "w_mbytes_per_sec": 0 00:09:11.685 }, 00:09:11.685 "claimed": true, 00:09:11.685 "claim_type": "exclusive_write", 00:09:11.685 "zoned": false, 00:09:11.685 "supported_io_types": { 00:09:11.686 "read": true, 00:09:11.686 "write": true, 00:09:11.686 
"unmap": true, 00:09:11.686 "flush": true, 00:09:11.686 "reset": true, 00:09:11.686 "nvme_admin": false, 00:09:11.686 "nvme_io": false, 00:09:11.686 "nvme_io_md": false, 00:09:11.686 "write_zeroes": true, 00:09:11.686 "zcopy": true, 00:09:11.686 "get_zone_info": false, 00:09:11.686 "zone_management": false, 00:09:11.686 "zone_append": false, 00:09:11.686 "compare": false, 00:09:11.686 "compare_and_write": false, 00:09:11.686 "abort": true, 00:09:11.686 "seek_hole": false, 00:09:11.686 "seek_data": false, 00:09:11.686 "copy": true, 00:09:11.686 "nvme_iov_md": false 00:09:11.686 }, 00:09:11.686 "memory_domains": [ 00:09:11.686 { 00:09:11.686 "dma_device_id": "system", 00:09:11.686 "dma_device_type": 1 00:09:11.686 }, 00:09:11.686 { 00:09:11.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.686 "dma_device_type": 2 00:09:11.686 } 00:09:11.686 ], 00:09:11.686 "driver_specific": {} 00:09:11.686 } 00:09:11.686 ] 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.686 "name": "Existed_Raid", 00:09:11.686 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:11.686 "strip_size_kb": 64, 00:09:11.686 "state": "online", 00:09:11.686 "raid_level": "raid0", 00:09:11.686 "superblock": true, 00:09:11.686 "num_base_bdevs": 3, 00:09:11.686 "num_base_bdevs_discovered": 3, 00:09:11.686 "num_base_bdevs_operational": 3, 00:09:11.686 "base_bdevs_list": [ 00:09:11.686 { 00:09:11.686 "name": "NewBaseBdev", 00:09:11.686 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:11.686 "is_configured": true, 00:09:11.686 "data_offset": 2048, 00:09:11.686 "data_size": 63488 00:09:11.686 }, 00:09:11.686 { 00:09:11.686 "name": "BaseBdev2", 00:09:11.686 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:11.686 "is_configured": true, 00:09:11.686 "data_offset": 2048, 00:09:11.686 "data_size": 63488 00:09:11.686 }, 00:09:11.686 { 00:09:11.686 "name": "BaseBdev3", 00:09:11.686 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:11.686 
"is_configured": true, 00:09:11.686 "data_offset": 2048, 00:09:11.686 "data_size": 63488 00:09:11.686 } 00:09:11.686 ] 00:09:11.686 }' 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.686 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.946 [2024-11-18 13:25:41.915073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.946 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.946 "name": "Existed_Raid", 00:09:11.946 "aliases": [ 00:09:11.946 "8c181886-a64d-4b04-9d3d-d6b73aba5208" 00:09:11.946 ], 00:09:11.946 "product_name": "Raid 
Volume", 00:09:11.946 "block_size": 512, 00:09:11.946 "num_blocks": 190464, 00:09:11.946 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:11.946 "assigned_rate_limits": { 00:09:11.946 "rw_ios_per_sec": 0, 00:09:11.946 "rw_mbytes_per_sec": 0, 00:09:11.946 "r_mbytes_per_sec": 0, 00:09:11.946 "w_mbytes_per_sec": 0 00:09:11.946 }, 00:09:11.946 "claimed": false, 00:09:11.946 "zoned": false, 00:09:11.946 "supported_io_types": { 00:09:11.946 "read": true, 00:09:11.946 "write": true, 00:09:11.946 "unmap": true, 00:09:11.946 "flush": true, 00:09:11.946 "reset": true, 00:09:11.946 "nvme_admin": false, 00:09:11.946 "nvme_io": false, 00:09:11.946 "nvme_io_md": false, 00:09:11.946 "write_zeroes": true, 00:09:11.946 "zcopy": false, 00:09:11.946 "get_zone_info": false, 00:09:11.946 "zone_management": false, 00:09:11.947 "zone_append": false, 00:09:11.947 "compare": false, 00:09:11.947 "compare_and_write": false, 00:09:11.947 "abort": false, 00:09:11.947 "seek_hole": false, 00:09:11.947 "seek_data": false, 00:09:11.947 "copy": false, 00:09:11.947 "nvme_iov_md": false 00:09:11.947 }, 00:09:11.947 "memory_domains": [ 00:09:11.947 { 00:09:11.947 "dma_device_id": "system", 00:09:11.947 "dma_device_type": 1 00:09:11.947 }, 00:09:11.947 { 00:09:11.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.947 "dma_device_type": 2 00:09:11.947 }, 00:09:11.947 { 00:09:11.947 "dma_device_id": "system", 00:09:11.947 "dma_device_type": 1 00:09:11.947 }, 00:09:11.947 { 00:09:11.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.947 "dma_device_type": 2 00:09:11.947 }, 00:09:11.947 { 00:09:11.947 "dma_device_id": "system", 00:09:11.947 "dma_device_type": 1 00:09:11.947 }, 00:09:11.947 { 00:09:11.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.947 "dma_device_type": 2 00:09:11.947 } 00:09:11.947 ], 00:09:11.947 "driver_specific": { 00:09:11.947 "raid": { 00:09:11.947 "uuid": "8c181886-a64d-4b04-9d3d-d6b73aba5208", 00:09:11.947 "strip_size_kb": 64, 00:09:11.947 "state": "online", 
00:09:11.947 "raid_level": "raid0", 00:09:11.947 "superblock": true, 00:09:11.947 "num_base_bdevs": 3, 00:09:11.947 "num_base_bdevs_discovered": 3, 00:09:11.947 "num_base_bdevs_operational": 3, 00:09:11.947 "base_bdevs_list": [ 00:09:11.947 { 00:09:11.947 "name": "NewBaseBdev", 00:09:11.947 "uuid": "47a153dc-13e8-4a0e-b9ce-b79a2f2d5f9b", 00:09:11.947 "is_configured": true, 00:09:11.947 "data_offset": 2048, 00:09:11.947 "data_size": 63488 00:09:11.947 }, 00:09:11.947 { 00:09:11.947 "name": "BaseBdev2", 00:09:11.947 "uuid": "0653122a-13a7-47f1-a593-93896bcd5720", 00:09:11.947 "is_configured": true, 00:09:11.947 "data_offset": 2048, 00:09:11.947 "data_size": 63488 00:09:11.947 }, 00:09:11.947 { 00:09:11.947 "name": "BaseBdev3", 00:09:11.947 "uuid": "8291c0dd-2531-48d6-94c0-1eed8b958af3", 00:09:11.947 "is_configured": true, 00:09:11.947 "data_offset": 2048, 00:09:11.947 "data_size": 63488 00:09:11.947 } 00:09:11.947 ] 00:09:11.947 } 00:09:11.947 } 00:09:11.947 }' 00:09:11.947 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.947 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:11.947 BaseBdev2 00:09:11.947 BaseBdev3' 00:09:12.208 13:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.208 [2024-11-18 13:25:42.206420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.208 [2024-11-18 13:25:42.206465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.208 [2024-11-18 13:25:42.206562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.208 [2024-11-18 13:25:42.206618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.208 [2024-11-18 13:25:42.206631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64471 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64471 ']' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64471 00:09:12.208 13:25:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64471 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64471' 00:09:12.208 killing process with pid 64471 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64471 00:09:12.208 [2024-11-18 13:25:42.256836] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.208 13:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64471 00:09:12.778 [2024-11-18 13:25:42.552643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.715 13:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:13.715 00:09:13.715 real 0m10.755s 00:09:13.715 user 0m17.081s 00:09:13.715 sys 0m1.974s 00:09:13.715 ************************************ 00:09:13.715 END TEST raid_state_function_test_sb 00:09:13.715 ************************************ 00:09:13.715 13:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.715 13:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.715 13:25:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:13.715 13:25:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:13.715 13:25:43 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.715 13:25:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.715 ************************************ 00:09:13.715 START TEST raid_superblock_test 00:09:13.715 ************************************ 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:13.715 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:13.716 13:25:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65091 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65091 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65091 ']' 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.716 13:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.975 [2024-11-18 13:25:43.814405] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:13.975 [2024-11-18 13:25:43.814638] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65091 ] 00:09:13.975 [2024-11-18 13:25:43.994275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.255 [2024-11-18 13:25:44.107513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.515 [2024-11-18 13:25:44.307997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.515 [2024-11-18 13:25:44.308058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:14.775 
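The loop traced above (bdev_raid.sh@416-423) builds three parallel arrays before issuing the `bdev_malloc_create` / `bdev_passthru_create` RPCs. A minimal Python sketch of that bookkeeping, assuming only what the trace shows (names `malloc<i>`/`pt<i>` and the fixed-pattern UUIDs seen in the `bdev_passthru_create` calls):

```python
# Sketch of the bdev_raid.sh@416-423 loop: build the per-base-bdev
# name and UUID arrays used by the malloc/passthru creation RPCs.
num_base_bdevs = 3

base_bdevs_malloc = []
base_bdevs_pt = []
base_bdevs_pt_uuid = []

for i in range(1, num_base_bdevs + 1):
    base_bdevs_malloc.append(f"malloc{i}")   # backing malloc bdev
    base_bdevs_pt.append(f"pt{i}")           # passthru wrapper on top
    # fixed-pattern UUID matching the bdev_passthru_create calls in the log
    base_bdevs_pt_uuid.append(f"00000000-0000-0000-0000-{i:012d}")

print(base_bdevs_pt)           # ['pt1', 'pt2', 'pt3']
print(base_bdevs_pt_uuid[0])   # 00000000-0000-0000-0000-000000000001
```

Each passthru bdev is then what `bdev_raid_create` receives as a base bdev, so claim/release behavior can be exercised without touching the malloc bdevs directly.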
13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.775 malloc1 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.775 [2024-11-18 13:25:44.672315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:14.775 [2024-11-18 13:25:44.672454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.775 [2024-11-18 13:25:44.672498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:14.775 [2024-11-18 13:25:44.672529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.775 [2024-11-18 13:25:44.674651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.775 [2024-11-18 13:25:44.674724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:14.775 pt1 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.775 malloc2 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.775 [2024-11-18 13:25:44.731233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.775 [2024-11-18 13:25:44.731376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.775 [2024-11-18 13:25:44.731402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:14.775 [2024-11-18 13:25:44.731411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.775 [2024-11-18 13:25:44.733419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.775 [2024-11-18 13:25:44.733457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.775 
pt2 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.775 malloc3 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.775 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.775 [2024-11-18 13:25:44.799462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:14.775 [2024-11-18 13:25:44.799604] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.776 [2024-11-18 13:25:44.799646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:14.776 [2024-11-18 13:25:44.799677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.776 [2024-11-18 13:25:44.801666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.776 [2024-11-18 13:25:44.801737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:14.776 pt3 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.776 [2024-11-18 13:25:44.811490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:14.776 [2024-11-18 13:25:44.813251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.776 [2024-11-18 13:25:44.813351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:14.776 [2024-11-18 13:25:44.813527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:14.776 [2024-11-18 13:25:44.813586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.776 [2024-11-18 13:25:44.813838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
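The `blockcnt 190464, blocklen 512` reported at configure time follows arithmetically from the RPC arguments in this test: each base is a 32 MiB malloc bdev with 512-byte blocks, and the `-s` superblock flag reserves a per-base `data_offset` of 2048 blocks (the value shown in `base_bdevs_list`), with raid0 summing the remaining data blocks. A quick check:

```python
# Recompute the raid0 geometry reported in the log from the RPC arguments:
#   bdev_malloc_create 32 512  -> 32 MiB backing bdevs, 512-byte blocks
#   bdev_raid_create ... -s    -> superblock reserves data_offset blocks/base
malloc_mib = 32
block_size = 512
num_base_bdevs = 3
data_offset = 2048          # blocks, per the base_bdevs_list dump

blocks_per_base = malloc_mib * 1024 * 1024 // block_size   # 65536
data_size = blocks_per_base - data_offset                  # 63488
raid0_num_blocks = num_base_bdevs * data_size              # 190464

print(blocks_per_base, data_size, raid0_num_blocks)
```

This matches the `data_size: 63488` per base bdev and `num_blocks: 190464` in the JSON dumps, confirming the superblock accounts for the 2048-block offset on each member.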
00:09:14.776 [2024-11-18 13:25:44.814024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:14.776 [2024-11-18 13:25:44.814066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:14.776 [2024-11-18 13:25:44.814264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.776 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.776 13:25:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.035 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.035 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.035 "name": "raid_bdev1", 00:09:15.035 "uuid": "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8", 00:09:15.035 "strip_size_kb": 64, 00:09:15.035 "state": "online", 00:09:15.035 "raid_level": "raid0", 00:09:15.035 "superblock": true, 00:09:15.035 "num_base_bdevs": 3, 00:09:15.035 "num_base_bdevs_discovered": 3, 00:09:15.035 "num_base_bdevs_operational": 3, 00:09:15.035 "base_bdevs_list": [ 00:09:15.035 { 00:09:15.035 "name": "pt1", 00:09:15.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.035 "is_configured": true, 00:09:15.035 "data_offset": 2048, 00:09:15.035 "data_size": 63488 00:09:15.035 }, 00:09:15.035 { 00:09:15.035 "name": "pt2", 00:09:15.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.035 "is_configured": true, 00:09:15.035 "data_offset": 2048, 00:09:15.035 "data_size": 63488 00:09:15.035 }, 00:09:15.035 { 00:09:15.035 "name": "pt3", 00:09:15.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.035 "is_configured": true, 00:09:15.035 "data_offset": 2048, 00:09:15.035 "data_size": 63488 00:09:15.035 } 00:09:15.035 ] 00:09:15.035 }' 00:09:15.035 13:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.035 13:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.294 [2024-11-18 13:25:45.275016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.294 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.294 "name": "raid_bdev1", 00:09:15.294 "aliases": [ 00:09:15.294 "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8" 00:09:15.294 ], 00:09:15.294 "product_name": "Raid Volume", 00:09:15.294 "block_size": 512, 00:09:15.294 "num_blocks": 190464, 00:09:15.294 "uuid": "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8", 00:09:15.294 "assigned_rate_limits": { 00:09:15.294 "rw_ios_per_sec": 0, 00:09:15.294 "rw_mbytes_per_sec": 0, 00:09:15.294 "r_mbytes_per_sec": 0, 00:09:15.294 "w_mbytes_per_sec": 0 00:09:15.294 }, 00:09:15.294 "claimed": false, 00:09:15.294 "zoned": false, 00:09:15.294 "supported_io_types": { 00:09:15.294 "read": true, 00:09:15.294 "write": true, 00:09:15.294 "unmap": true, 00:09:15.294 "flush": true, 00:09:15.294 "reset": true, 00:09:15.294 "nvme_admin": false, 00:09:15.294 "nvme_io": false, 00:09:15.294 "nvme_io_md": false, 00:09:15.294 "write_zeroes": true, 00:09:15.294 "zcopy": false, 00:09:15.294 "get_zone_info": false, 00:09:15.294 "zone_management": false, 00:09:15.294 "zone_append": false, 00:09:15.294 "compare": 
false, 00:09:15.294 "compare_and_write": false, 00:09:15.294 "abort": false, 00:09:15.294 "seek_hole": false, 00:09:15.294 "seek_data": false, 00:09:15.294 "copy": false, 00:09:15.294 "nvme_iov_md": false 00:09:15.294 }, 00:09:15.294 "memory_domains": [ 00:09:15.294 { 00:09:15.294 "dma_device_id": "system", 00:09:15.294 "dma_device_type": 1 00:09:15.294 }, 00:09:15.294 { 00:09:15.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.294 "dma_device_type": 2 00:09:15.294 }, 00:09:15.294 { 00:09:15.294 "dma_device_id": "system", 00:09:15.294 "dma_device_type": 1 00:09:15.294 }, 00:09:15.295 { 00:09:15.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.295 "dma_device_type": 2 00:09:15.295 }, 00:09:15.295 { 00:09:15.295 "dma_device_id": "system", 00:09:15.295 "dma_device_type": 1 00:09:15.295 }, 00:09:15.295 { 00:09:15.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.295 "dma_device_type": 2 00:09:15.295 } 00:09:15.295 ], 00:09:15.295 "driver_specific": { 00:09:15.295 "raid": { 00:09:15.295 "uuid": "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8", 00:09:15.295 "strip_size_kb": 64, 00:09:15.295 "state": "online", 00:09:15.295 "raid_level": "raid0", 00:09:15.295 "superblock": true, 00:09:15.295 "num_base_bdevs": 3, 00:09:15.295 "num_base_bdevs_discovered": 3, 00:09:15.295 "num_base_bdevs_operational": 3, 00:09:15.295 "base_bdevs_list": [ 00:09:15.295 { 00:09:15.295 "name": "pt1", 00:09:15.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.295 "is_configured": true, 00:09:15.295 "data_offset": 2048, 00:09:15.295 "data_size": 63488 00:09:15.295 }, 00:09:15.295 { 00:09:15.295 "name": "pt2", 00:09:15.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.295 "is_configured": true, 00:09:15.295 "data_offset": 2048, 00:09:15.295 "data_size": 63488 00:09:15.295 }, 00:09:15.295 { 00:09:15.295 "name": "pt3", 00:09:15.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.295 "is_configured": true, 00:09:15.295 "data_offset": 2048, 00:09:15.295 "data_size": 
63488 00:09:15.295 } 00:09:15.295 ] 00:09:15.295 } 00:09:15.295 } 00:09:15.295 }' 00:09:15.295 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:15.553 pt2 00:09:15.553 pt3' 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.553 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.554 [2024-11-18 13:25:45.558441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1d65a1ff-2cfe-4880-80a0-a3950e41bcc8 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1d65a1ff-2cfe-4880-80a0-a3950e41bcc8 ']' 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.554 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.554 [2024-11-18 13:25:45.602088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.554 [2024-11-18 13:25:45.602198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.554 [2024-11-18 13:25:45.602285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.554 [2024-11-18 13:25:45.602346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.554 [2024-11-18 13:25:45.602356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
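The jq filter used repeatedly above, `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`, has a direct Python equivalent. The dict below is a trimmed, hypothetical stand-in for the `bdev_get_bdevs` output (one member deliberately marked unconfigured to show the select doing work; in the real log all three are configured):

```python
import json

# Trimmed stand-in for `rpc.py bdev_get_bdevs -b raid_bdev1` output.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": false}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
configured = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(configured)  # ['pt1', 'pt2']
```

The test then iterates over exactly this name list, calling `bdev_get_bdevs -b <name>` per member and comparing `[.block_size, .md_size, .md_interleave, .dif_type]` against the raid bdev's own values.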
00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:15.814 13:25:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.814 [2024-11-18 13:25:45.753877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:15.814 [2024-11-18 13:25:45.755735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:15.814 [2024-11-18 13:25:45.755790] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:15.814 [2024-11-18 13:25:45.755837] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:15.814 [2024-11-18 13:25:45.755886] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:15.814 [2024-11-18 13:25:45.755905] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:15.814 [2024-11-18 13:25:45.755922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.814 [2024-11-18 13:25:45.755933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:15.814 request: 00:09:15.814 { 00:09:15.814 "name": "raid_bdev1", 00:09:15.814 "raid_level": "raid0", 00:09:15.814 "base_bdevs": [ 00:09:15.814 "malloc1", 00:09:15.814 "malloc2", 00:09:15.814 "malloc3" 00:09:15.814 ], 00:09:15.814 "strip_size_kb": 64, 00:09:15.814 "superblock": false, 00:09:15.814 "method": "bdev_raid_create", 00:09:15.814 "req_id": 1 00:09:15.814 } 00:09:15.814 Got JSON-RPC error response 00:09:15.814 response: 00:09:15.814 { 00:09:15.814 "code": -17, 00:09:15.814 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:15.814 } 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:15.814 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.815 [2024-11-18 13:25:45.825701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:15.815 [2024-11-18 13:25:45.825749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.815 [2024-11-18 13:25:45.825768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:15.815 [2024-11-18 13:25:45.825777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.815 [2024-11-18 13:25:45.827883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.815 [2024-11-18 13:25:45.827920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:15.815 [2024-11-18 13:25:45.827995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:15.815 [2024-11-18 13:25:45.828052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:15.815 pt1 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.815 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.075 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.075 "name": "raid_bdev1", 00:09:16.075 "uuid": "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8", 00:09:16.075 
"strip_size_kb": 64, 00:09:16.075 "state": "configuring", 00:09:16.075 "raid_level": "raid0", 00:09:16.075 "superblock": true, 00:09:16.075 "num_base_bdevs": 3, 00:09:16.075 "num_base_bdevs_discovered": 1, 00:09:16.075 "num_base_bdevs_operational": 3, 00:09:16.075 "base_bdevs_list": [ 00:09:16.075 { 00:09:16.075 "name": "pt1", 00:09:16.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.075 "is_configured": true, 00:09:16.075 "data_offset": 2048, 00:09:16.075 "data_size": 63488 00:09:16.075 }, 00:09:16.075 { 00:09:16.075 "name": null, 00:09:16.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.075 "is_configured": false, 00:09:16.075 "data_offset": 2048, 00:09:16.075 "data_size": 63488 00:09:16.075 }, 00:09:16.075 { 00:09:16.075 "name": null, 00:09:16.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.075 "is_configured": false, 00:09:16.075 "data_offset": 2048, 00:09:16.075 "data_size": 63488 00:09:16.075 } 00:09:16.075 ] 00:09:16.075 }' 00:09:16.075 13:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.075 13:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.334 [2024-11-18 13:25:46.229069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.334 [2024-11-18 13:25:46.229203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.334 [2024-11-18 13:25:46.229246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:16.334 [2024-11-18 13:25:46.229276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.334 [2024-11-18 13:25:46.229722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.334 [2024-11-18 13:25:46.229784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:16.334 [2024-11-18 13:25:46.229904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:16.334 [2024-11-18 13:25:46.229956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.334 pt2 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.334 [2024-11-18 13:25:46.241046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.334 13:25:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.334 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.334 "name": "raid_bdev1", 00:09:16.334 "uuid": "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8", 00:09:16.334 "strip_size_kb": 64, 00:09:16.334 "state": "configuring", 00:09:16.334 "raid_level": "raid0", 00:09:16.334 "superblock": true, 00:09:16.334 "num_base_bdevs": 3, 00:09:16.334 "num_base_bdevs_discovered": 1, 00:09:16.334 "num_base_bdevs_operational": 3, 00:09:16.334 "base_bdevs_list": [ 00:09:16.334 { 00:09:16.334 "name": "pt1", 00:09:16.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.334 "is_configured": true, 00:09:16.334 "data_offset": 2048, 00:09:16.334 "data_size": 63488 00:09:16.334 }, 00:09:16.334 { 00:09:16.334 "name": null, 00:09:16.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.335 "is_configured": false, 00:09:16.335 "data_offset": 0, 00:09:16.335 "data_size": 63488 00:09:16.335 }, 00:09:16.335 { 00:09:16.335 "name": null, 00:09:16.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.335 
"is_configured": false, 00:09:16.335 "data_offset": 2048, 00:09:16.335 "data_size": 63488 00:09:16.335 } 00:09:16.335 ] 00:09:16.335 }' 00:09:16.335 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.335 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.905 [2024-11-18 13:25:46.672275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.905 [2024-11-18 13:25:46.672365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.905 [2024-11-18 13:25:46.672385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:16.905 [2024-11-18 13:25:46.672397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.905 [2024-11-18 13:25:46.672841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.905 [2024-11-18 13:25:46.672863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:16.905 [2024-11-18 13:25:46.672943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:16.905 [2024-11-18 13:25:46.672968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.905 pt2 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.905 [2024-11-18 13:25:46.684247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.905 [2024-11-18 13:25:46.684376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.905 [2024-11-18 13:25:46.684393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:16.905 [2024-11-18 13:25:46.684403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.905 [2024-11-18 13:25:46.684753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.905 [2024-11-18 13:25:46.684775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.905 [2024-11-18 13:25:46.684836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:16.905 [2024-11-18 13:25:46.684857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.905 [2024-11-18 13:25:46.684958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:16.905 [2024-11-18 13:25:46.684968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:16.905 [2024-11-18 13:25:46.685218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:16.905 [2024-11-18 13:25:46.685372] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:16.905 [2024-11-18 13:25:46.685381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:16.905 [2024-11-18 13:25:46.685503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.905 pt3 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.905 "name": "raid_bdev1", 00:09:16.905 "uuid": "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8", 00:09:16.905 "strip_size_kb": 64, 00:09:16.905 "state": "online", 00:09:16.905 "raid_level": "raid0", 00:09:16.905 "superblock": true, 00:09:16.905 "num_base_bdevs": 3, 00:09:16.905 "num_base_bdevs_discovered": 3, 00:09:16.905 "num_base_bdevs_operational": 3, 00:09:16.905 "base_bdevs_list": [ 00:09:16.905 { 00:09:16.905 "name": "pt1", 00:09:16.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.905 "is_configured": true, 00:09:16.905 "data_offset": 2048, 00:09:16.905 "data_size": 63488 00:09:16.905 }, 00:09:16.905 { 00:09:16.905 "name": "pt2", 00:09:16.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.905 "is_configured": true, 00:09:16.905 "data_offset": 2048, 00:09:16.905 "data_size": 63488 00:09:16.905 }, 00:09:16.905 { 00:09:16.905 "name": "pt3", 00:09:16.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.905 "is_configured": true, 00:09:16.905 "data_offset": 2048, 00:09:16.905 "data_size": 63488 00:09:16.905 } 00:09:16.905 ] 00:09:16.905 }' 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.905 13:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:17.165 13:25:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.165 [2024-11-18 13:25:47.135776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.165 "name": "raid_bdev1", 00:09:17.165 "aliases": [ 00:09:17.165 "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8" 00:09:17.165 ], 00:09:17.165 "product_name": "Raid Volume", 00:09:17.165 "block_size": 512, 00:09:17.165 "num_blocks": 190464, 00:09:17.165 "uuid": "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8", 00:09:17.165 "assigned_rate_limits": { 00:09:17.165 "rw_ios_per_sec": 0, 00:09:17.165 "rw_mbytes_per_sec": 0, 00:09:17.165 "r_mbytes_per_sec": 0, 00:09:17.165 "w_mbytes_per_sec": 0 00:09:17.165 }, 00:09:17.165 "claimed": false, 00:09:17.165 "zoned": false, 00:09:17.165 "supported_io_types": { 00:09:17.165 "read": true, 00:09:17.165 "write": true, 00:09:17.165 "unmap": true, 00:09:17.165 "flush": true, 00:09:17.165 "reset": true, 00:09:17.165 "nvme_admin": false, 00:09:17.165 "nvme_io": false, 00:09:17.165 "nvme_io_md": false, 00:09:17.165 
"write_zeroes": true, 00:09:17.165 "zcopy": false, 00:09:17.165 "get_zone_info": false, 00:09:17.165 "zone_management": false, 00:09:17.165 "zone_append": false, 00:09:17.165 "compare": false, 00:09:17.165 "compare_and_write": false, 00:09:17.165 "abort": false, 00:09:17.165 "seek_hole": false, 00:09:17.165 "seek_data": false, 00:09:17.165 "copy": false, 00:09:17.165 "nvme_iov_md": false 00:09:17.165 }, 00:09:17.165 "memory_domains": [ 00:09:17.165 { 00:09:17.165 "dma_device_id": "system", 00:09:17.165 "dma_device_type": 1 00:09:17.165 }, 00:09:17.165 { 00:09:17.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.165 "dma_device_type": 2 00:09:17.165 }, 00:09:17.165 { 00:09:17.165 "dma_device_id": "system", 00:09:17.165 "dma_device_type": 1 00:09:17.165 }, 00:09:17.165 { 00:09:17.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.165 "dma_device_type": 2 00:09:17.165 }, 00:09:17.165 { 00:09:17.165 "dma_device_id": "system", 00:09:17.165 "dma_device_type": 1 00:09:17.165 }, 00:09:17.165 { 00:09:17.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.165 "dma_device_type": 2 00:09:17.165 } 00:09:17.165 ], 00:09:17.165 "driver_specific": { 00:09:17.165 "raid": { 00:09:17.165 "uuid": "1d65a1ff-2cfe-4880-80a0-a3950e41bcc8", 00:09:17.165 "strip_size_kb": 64, 00:09:17.165 "state": "online", 00:09:17.165 "raid_level": "raid0", 00:09:17.165 "superblock": true, 00:09:17.165 "num_base_bdevs": 3, 00:09:17.165 "num_base_bdevs_discovered": 3, 00:09:17.165 "num_base_bdevs_operational": 3, 00:09:17.165 "base_bdevs_list": [ 00:09:17.165 { 00:09:17.165 "name": "pt1", 00:09:17.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.165 "is_configured": true, 00:09:17.165 "data_offset": 2048, 00:09:17.165 "data_size": 63488 00:09:17.165 }, 00:09:17.165 { 00:09:17.165 "name": "pt2", 00:09:17.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.165 "is_configured": true, 00:09:17.165 "data_offset": 2048, 00:09:17.165 "data_size": 63488 00:09:17.165 }, 00:09:17.165 
{ 00:09:17.165 "name": "pt3", 00:09:17.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.165 "is_configured": true, 00:09:17.165 "data_offset": 2048, 00:09:17.165 "data_size": 63488 00:09:17.165 } 00:09:17.165 ] 00:09:17.165 } 00:09:17.165 } 00:09:17.165 }' 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:17.165 pt2 00:09:17.165 pt3' 00:09:17.165 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:17.425 [2024-11-18 
13:25:47.419227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1d65a1ff-2cfe-4880-80a0-a3950e41bcc8 '!=' 1d65a1ff-2cfe-4880-80a0-a3950e41bcc8 ']' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65091 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65091 ']' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65091 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.425 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65091 00:09:17.684 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.684 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.684 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65091' 00:09:17.684 killing process with pid 65091 00:09:17.684 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65091 00:09:17.684 [2024-11-18 13:25:47.507234] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.684 [2024-11-18 13:25:47.507438] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.684 13:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65091 00:09:17.684 [2024-11-18 13:25:47.507531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.684 [2024-11-18 13:25:47.507580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:17.943 [2024-11-18 13:25:47.799354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.882 13:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:18.882 00:09:18.882 real 0m5.175s 00:09:18.882 user 0m7.365s 00:09:18.882 sys 0m0.954s 00:09:18.882 13:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.882 13:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.882 ************************************ 00:09:18.882 END TEST raid_superblock_test 00:09:18.882 ************************************ 00:09:19.142 13:25:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:19.142 13:25:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:19.142 13:25:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.142 13:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.142 ************************************ 00:09:19.142 START TEST raid_read_error_test 00:09:19.142 ************************************ 00:09:19.142 13:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:19.142 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:19.143 13:25:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Yh6ZhZXn8N 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65343 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65343 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65343 ']' 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.143 13:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.143 [2024-11-18 13:25:49.056935] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:19.143 [2024-11-18 13:25:49.057160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65343 ] 00:09:19.402 [2024-11-18 13:25:49.236113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.402 [2024-11-18 13:25:49.347130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.660 [2024-11-18 13:25:49.539753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.660 [2024-11-18 13:25:49.539892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.920 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.920 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:19.920 13:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:19.920 13:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:19.920 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.920 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.920 BaseBdev1_malloc 00:09:19.920 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.921 13:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:19.921 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.921 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 true 00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 [2024-11-18 13:25:49.977315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:20.181 [2024-11-18 13:25:49.977469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.181 [2024-11-18 13:25:49.977503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:20.181 [2024-11-18 13:25:49.977513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.181 [2024-11-18 13:25:49.979607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.181 [2024-11-18 13:25:49.979649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:20.181 BaseBdev1 00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.181 13:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 BaseBdev2_malloc 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 true 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 [2024-11-18 13:25:50.042635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:20.181 [2024-11-18 13:25:50.042691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.181 [2024-11-18 13:25:50.042707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:20.181 [2024-11-18 13:25:50.042718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.181 [2024-11-18 13:25:50.044752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.181 [2024-11-18 13:25:50.044792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:20.181 BaseBdev2 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 BaseBdev3_malloc 00:09:20.181 13:25:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 true 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 [2024-11-18 13:25:50.118412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:20.181 [2024-11-18 13:25:50.118546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.181 [2024-11-18 13:25:50.118565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:20.181 [2024-11-18 13:25:50.118577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.181 [2024-11-18 13:25:50.120587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.181 [2024-11-18 13:25:50.120640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:20.181 BaseBdev3 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.181 [2024-11-18 13:25:50.130461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.181 [2024-11-18 13:25:50.132213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.181 [2024-11-18 13:25:50.132290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.181 [2024-11-18 13:25:50.132474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:20.181 [2024-11-18 13:25:50.132491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.181 [2024-11-18 13:25:50.132722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:20.181 [2024-11-18 13:25:50.132859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:20.181 [2024-11-18 13:25:50.132872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:20.181 [2024-11-18 13:25:50.132996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.181 13:25:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.181 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.182 "name": "raid_bdev1", 00:09:20.182 "uuid": "7fc7e5b6-a8f7-493f-b6fe-2f3ed14d4db3", 00:09:20.182 "strip_size_kb": 64, 00:09:20.182 "state": "online", 00:09:20.182 "raid_level": "raid0", 00:09:20.182 "superblock": true, 00:09:20.182 "num_base_bdevs": 3, 00:09:20.182 "num_base_bdevs_discovered": 3, 00:09:20.182 "num_base_bdevs_operational": 3, 00:09:20.182 "base_bdevs_list": [ 00:09:20.182 { 00:09:20.182 "name": "BaseBdev1", 00:09:20.182 "uuid": "ad18663a-e37d-514d-954a-c57f047b06fb", 00:09:20.182 "is_configured": true, 00:09:20.182 "data_offset": 2048, 00:09:20.182 "data_size": 63488 00:09:20.182 }, 00:09:20.182 { 00:09:20.182 "name": "BaseBdev2", 00:09:20.182 "uuid": "c28fa333-9ef1-5c5b-99d7-154df0919406", 00:09:20.182 "is_configured": true, 00:09:20.182 "data_offset": 2048, 00:09:20.182 "data_size": 63488 
00:09:20.182 }, 00:09:20.182 { 00:09:20.182 "name": "BaseBdev3", 00:09:20.182 "uuid": "a54c6326-6fe5-5522-bc6d-ba06344b3546", 00:09:20.182 "is_configured": true, 00:09:20.182 "data_offset": 2048, 00:09:20.182 "data_size": 63488 00:09:20.182 } 00:09:20.182 ] 00:09:20.182 }' 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.182 13:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.751 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:20.751 13:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:20.751 [2024-11-18 13:25:50.690755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.692 "name": "raid_bdev1", 00:09:21.692 "uuid": "7fc7e5b6-a8f7-493f-b6fe-2f3ed14d4db3", 00:09:21.692 "strip_size_kb": 64, 00:09:21.692 "state": "online", 00:09:21.692 "raid_level": "raid0", 00:09:21.692 "superblock": true, 00:09:21.692 "num_base_bdevs": 3, 00:09:21.692 "num_base_bdevs_discovered": 3, 00:09:21.692 "num_base_bdevs_operational": 3, 00:09:21.692 "base_bdevs_list": [ 00:09:21.692 { 00:09:21.692 "name": "BaseBdev1", 00:09:21.692 "uuid": "ad18663a-e37d-514d-954a-c57f047b06fb", 00:09:21.692 "is_configured": true, 00:09:21.692 "data_offset": 2048, 00:09:21.692 "data_size": 63488 
00:09:21.692 }, 00:09:21.692 { 00:09:21.692 "name": "BaseBdev2", 00:09:21.692 "uuid": "c28fa333-9ef1-5c5b-99d7-154df0919406", 00:09:21.692 "is_configured": true, 00:09:21.692 "data_offset": 2048, 00:09:21.692 "data_size": 63488 00:09:21.692 }, 00:09:21.692 { 00:09:21.692 "name": "BaseBdev3", 00:09:21.692 "uuid": "a54c6326-6fe5-5522-bc6d-ba06344b3546", 00:09:21.692 "is_configured": true, 00:09:21.692 "data_offset": 2048, 00:09:21.692 "data_size": 63488 00:09:21.692 } 00:09:21.692 ] 00:09:21.692 }' 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.692 13:25:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.261 [2024-11-18 13:25:52.062601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.261 [2024-11-18 13:25:52.062747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.261 [2024-11-18 13:25:52.065400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.261 [2024-11-18 13:25:52.065442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.261 [2024-11-18 13:25:52.065479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.261 [2024-11-18 13:25:52.065488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:22.261 { 00:09:22.261 "results": [ 00:09:22.261 { 00:09:22.261 "job": "raid_bdev1", 00:09:22.261 "core_mask": "0x1", 00:09:22.261 "workload": "randrw", 00:09:22.261 "percentage": 50, 
00:09:22.261 "status": "finished", 00:09:22.261 "queue_depth": 1, 00:09:22.261 "io_size": 131072, 00:09:22.261 "runtime": 1.372884, 00:09:22.261 "iops": 16060.3517850015, 00:09:22.261 "mibps": 2007.5439731251874, 00:09:22.261 "io_failed": 1, 00:09:22.261 "io_timeout": 0, 00:09:22.261 "avg_latency_us": 86.58571274099158, 00:09:22.261 "min_latency_us": 19.89868995633188, 00:09:22.261 "max_latency_us": 1352.216593886463 00:09:22.261 } 00:09:22.261 ], 00:09:22.261 "core_count": 1 00:09:22.261 } 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65343 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65343 ']' 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65343 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65343 00:09:22.261 killing process with pid 65343 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65343' 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65343 00:09:22.261 [2024-11-18 13:25:52.112807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.261 13:25:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65343 00:09:22.521 [2024-11-18 
13:25:52.343291] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.460 13:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:23.460 13:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Yh6ZhZXn8N 00:09:23.460 13:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:23.720 13:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:23.720 13:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:23.720 ************************************ 00:09:23.720 END TEST raid_read_error_test 00:09:23.720 ************************************ 00:09:23.720 13:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:23.720 13:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:23.720 13:25:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:23.720 00:09:23.720 real 0m4.557s 00:09:23.720 user 0m5.455s 00:09:23.720 sys 0m0.585s 00:09:23.720 13:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.720 13:25:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.720 13:25:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:23.720 13:25:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:23.720 13:25:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.720 13:25:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.720 ************************************ 00:09:23.720 START TEST raid_write_error_test 00:09:23.720 ************************************ 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:23.720 13:25:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:23.720 13:25:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZX9tgTE0s5 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65490 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65490 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65490 ']' 00:09:23.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.720 13:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.720 [2024-11-18 13:25:53.713087] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:23.720 [2024-11-18 13:25:53.713244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65490 ] 00:09:23.980 [2024-11-18 13:25:53.894194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.980 [2024-11-18 13:25:54.003048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.239 [2024-11-18 13:25:54.202145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.239 [2024-11-18 13:25:54.202204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 BaseBdev1_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 true 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 [2024-11-18 13:25:54.607965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.809 [2024-11-18 13:25:54.608027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.809 [2024-11-18 13:25:54.608045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:24.809 [2024-11-18 13:25:54.608055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.809 [2024-11-18 13:25:54.609981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.809 [2024-11-18 13:25:54.610102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.809 BaseBdev1 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.809 BaseBdev2_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 true 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 [2024-11-18 13:25:54.670741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:24.809 [2024-11-18 13:25:54.670798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.809 [2024-11-18 13:25:54.670813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:24.809 [2024-11-18 13:25:54.670823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.809 [2024-11-18 13:25:54.672769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.809 [2024-11-18 13:25:54.672812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:24.809 BaseBdev2 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.809 13:25:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 BaseBdev3_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 true 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.809 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.809 [2024-11-18 13:25:54.744079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:24.809 [2024-11-18 13:25:54.744239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.809 [2024-11-18 13:25:54.744260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:24.809 [2024-11-18 13:25:54.744270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.809 [2024-11-18 13:25:54.746194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.810 [2024-11-18 13:25:54.746233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:24.810 BaseBdev3 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.810 [2024-11-18 13:25:54.756124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.810 [2024-11-18 13:25:54.757783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.810 [2024-11-18 13:25:54.757862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.810 [2024-11-18 13:25:54.758045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:24.810 [2024-11-18 13:25:54.758058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.810 [2024-11-18 13:25:54.758294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:24.810 [2024-11-18 13:25:54.758463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:24.810 [2024-11-18 13:25:54.758476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:24.810 [2024-11-18 13:25:54.758619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.810 "name": "raid_bdev1", 00:09:24.810 "uuid": "f330bb69-aece-43e3-9e11-342cee121d3b", 00:09:24.810 "strip_size_kb": 64, 00:09:24.810 "state": "online", 00:09:24.810 "raid_level": "raid0", 00:09:24.810 "superblock": true, 00:09:24.810 "num_base_bdevs": 3, 00:09:24.810 "num_base_bdevs_discovered": 3, 00:09:24.810 "num_base_bdevs_operational": 3, 00:09:24.810 "base_bdevs_list": [ 00:09:24.810 { 00:09:24.810 "name": "BaseBdev1", 
00:09:24.810 "uuid": "db4930a6-2cf0-5dea-ab05-cb245692a4bf", 00:09:24.810 "is_configured": true, 00:09:24.810 "data_offset": 2048, 00:09:24.810 "data_size": 63488 00:09:24.810 }, 00:09:24.810 { 00:09:24.810 "name": "BaseBdev2", 00:09:24.810 "uuid": "96e8baf9-2339-525c-83ec-5357d6b6b51e", 00:09:24.810 "is_configured": true, 00:09:24.810 "data_offset": 2048, 00:09:24.810 "data_size": 63488 00:09:24.810 }, 00:09:24.810 { 00:09:24.810 "name": "BaseBdev3", 00:09:24.810 "uuid": "09736e02-eadd-5c34-9c37-da5a1000ff8b", 00:09:24.810 "is_configured": true, 00:09:24.810 "data_offset": 2048, 00:09:24.810 "data_size": 63488 00:09:24.810 } 00:09:24.810 ] 00:09:24.810 }' 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.810 13:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.379 13:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:25.379 13:25:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:25.379 [2024-11-18 13:25:55.304397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.320 "name": "raid_bdev1", 00:09:26.320 "uuid": "f330bb69-aece-43e3-9e11-342cee121d3b", 00:09:26.320 "strip_size_kb": 64, 00:09:26.320 "state": "online", 00:09:26.320 
"raid_level": "raid0", 00:09:26.320 "superblock": true, 00:09:26.320 "num_base_bdevs": 3, 00:09:26.320 "num_base_bdevs_discovered": 3, 00:09:26.320 "num_base_bdevs_operational": 3, 00:09:26.320 "base_bdevs_list": [ 00:09:26.320 { 00:09:26.320 "name": "BaseBdev1", 00:09:26.320 "uuid": "db4930a6-2cf0-5dea-ab05-cb245692a4bf", 00:09:26.320 "is_configured": true, 00:09:26.320 "data_offset": 2048, 00:09:26.320 "data_size": 63488 00:09:26.320 }, 00:09:26.320 { 00:09:26.320 "name": "BaseBdev2", 00:09:26.320 "uuid": "96e8baf9-2339-525c-83ec-5357d6b6b51e", 00:09:26.320 "is_configured": true, 00:09:26.320 "data_offset": 2048, 00:09:26.320 "data_size": 63488 00:09:26.320 }, 00:09:26.320 { 00:09:26.320 "name": "BaseBdev3", 00:09:26.320 "uuid": "09736e02-eadd-5c34-9c37-da5a1000ff8b", 00:09:26.320 "is_configured": true, 00:09:26.320 "data_offset": 2048, 00:09:26.320 "data_size": 63488 00:09:26.320 } 00:09:26.320 ] 00:09:26.320 }' 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.320 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.892 [2024-11-18 13:25:56.648053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.892 [2024-11-18 13:25:56.648200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.892 [2024-11-18 13:25:56.650669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.892 [2024-11-18 13:25:56.650751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.892 [2024-11-18 13:25:56.650805] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.892 [2024-11-18 13:25:56.650860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:26.892 { 00:09:26.892 "results": [ 00:09:26.892 { 00:09:26.892 "job": "raid_bdev1", 00:09:26.892 "core_mask": "0x1", 00:09:26.892 "workload": "randrw", 00:09:26.892 "percentage": 50, 00:09:26.892 "status": "finished", 00:09:26.892 "queue_depth": 1, 00:09:26.892 "io_size": 131072, 00:09:26.892 "runtime": 1.344595, 00:09:26.892 "iops": 16552.93973278199, 00:09:26.892 "mibps": 2069.1174665977487, 00:09:26.892 "io_failed": 1, 00:09:26.892 "io_timeout": 0, 00:09:26.892 "avg_latency_us": 84.10695719629388, 00:09:26.892 "min_latency_us": 24.370305676855896, 00:09:26.892 "max_latency_us": 1373.6803493449781 00:09:26.892 } 00:09:26.892 ], 00:09:26.892 "core_count": 1 00:09:26.892 } 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65490 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65490 ']' 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65490 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65490 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.892 killing process with pid 65490 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.892 13:25:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65490' 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65490 00:09:26.892 [2024-11-18 13:25:56.702384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.892 13:25:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65490 00:09:26.892 [2024-11-18 13:25:56.924513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZX9tgTE0s5 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:28.361 ************************************ 00:09:28.361 END TEST raid_write_error_test 00:09:28.361 ************************************ 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:28.361 00:09:28.361 real 0m4.467s 00:09:28.361 user 0m5.301s 00:09:28.361 sys 0m0.601s 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.361 13:25:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.361 13:25:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:28.361 13:25:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:28.362 13:25:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.362 13:25:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.362 13:25:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.362 ************************************ 00:09:28.362 START TEST raid_state_function_test 00:09:28.362 ************************************ 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:28.362 13:25:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65628 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65628' 00:09:28.362 Process raid pid: 65628 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65628 00:09:28.362 13:25:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65628 ']' 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.362 13:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.362 [2024-11-18 13:25:58.234253] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:28.362 [2024-11-18 13:25:58.234391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.362 [2024-11-18 13:25:58.412652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.622 [2024-11-18 13:25:58.528776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.882 [2024-11-18 13:25:58.736734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.882 [2024-11-18 13:25:58.736846] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 [2024-11-18 13:25:59.135186] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.143 [2024-11-18 13:25:59.135248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.143 [2024-11-18 13:25:59.135258] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.143 [2024-11-18 13:25:59.135268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.143 [2024-11-18 13:25:59.135274] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.143 [2024-11-18 13:25:59.135283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.143 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.143 "name": "Existed_Raid", 00:09:29.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.143 "strip_size_kb": 64, 00:09:29.143 "state": "configuring", 00:09:29.143 "raid_level": "concat", 00:09:29.143 "superblock": false, 00:09:29.143 "num_base_bdevs": 3, 00:09:29.143 "num_base_bdevs_discovered": 0, 00:09:29.143 "num_base_bdevs_operational": 3, 00:09:29.143 "base_bdevs_list": [ 00:09:29.143 { 00:09:29.143 "name": "BaseBdev1", 00:09:29.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.143 "is_configured": false, 00:09:29.143 "data_offset": 0, 00:09:29.143 "data_size": 0 00:09:29.143 }, 00:09:29.144 { 00:09:29.144 "name": "BaseBdev2", 00:09:29.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.144 "is_configured": false, 00:09:29.144 "data_offset": 0, 00:09:29.144 "data_size": 0 00:09:29.144 }, 00:09:29.144 { 00:09:29.144 "name": "BaseBdev3", 00:09:29.144 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:29.144 "is_configured": false, 00:09:29.144 "data_offset": 0, 00:09:29.144 "data_size": 0 00:09:29.144 } 00:09:29.144 ] 00:09:29.144 }' 00:09:29.144 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.144 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.714 [2024-11-18 13:25:59.578416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.714 [2024-11-18 13:25:59.578543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.714 [2024-11-18 13:25:59.590365] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.714 [2024-11-18 13:25:59.590454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.714 [2024-11-18 13:25:59.590483] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.714 [2024-11-18 13:25:59.590507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:29.714 [2024-11-18 13:25:59.590525] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.714 [2024-11-18 13:25:59.590547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.714 [2024-11-18 13:25:59.638352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.714 BaseBdev1 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.714 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.714 [ 00:09:29.714 { 00:09:29.714 "name": "BaseBdev1", 00:09:29.714 "aliases": [ 00:09:29.714 "f6649607-2f47-4788-950f-34c52f5942a9" 00:09:29.714 ], 00:09:29.714 "product_name": "Malloc disk", 00:09:29.714 "block_size": 512, 00:09:29.714 "num_blocks": 65536, 00:09:29.714 "uuid": "f6649607-2f47-4788-950f-34c52f5942a9", 00:09:29.714 "assigned_rate_limits": { 00:09:29.714 "rw_ios_per_sec": 0, 00:09:29.714 "rw_mbytes_per_sec": 0, 00:09:29.714 "r_mbytes_per_sec": 0, 00:09:29.714 "w_mbytes_per_sec": 0 00:09:29.714 }, 00:09:29.714 "claimed": true, 00:09:29.714 "claim_type": "exclusive_write", 00:09:29.714 "zoned": false, 00:09:29.714 "supported_io_types": { 00:09:29.714 "read": true, 00:09:29.714 "write": true, 00:09:29.714 "unmap": true, 00:09:29.714 "flush": true, 00:09:29.714 "reset": true, 00:09:29.714 "nvme_admin": false, 00:09:29.714 "nvme_io": false, 00:09:29.714 "nvme_io_md": false, 00:09:29.714 "write_zeroes": true, 00:09:29.714 "zcopy": true, 00:09:29.714 "get_zone_info": false, 00:09:29.714 "zone_management": false, 00:09:29.714 "zone_append": false, 00:09:29.714 "compare": false, 00:09:29.714 "compare_and_write": false, 00:09:29.714 "abort": true, 00:09:29.714 "seek_hole": false, 00:09:29.714 "seek_data": false, 00:09:29.714 "copy": true, 00:09:29.714 "nvme_iov_md": false 00:09:29.714 }, 00:09:29.714 "memory_domains": [ 00:09:29.714 { 00:09:29.714 "dma_device_id": "system", 00:09:29.714 "dma_device_type": 1 00:09:29.714 }, 00:09:29.714 { 00:09:29.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:29.715 "dma_device_type": 2 00:09:29.715 } 00:09:29.715 ], 00:09:29.715 "driver_specific": {} 00:09:29.715 } 00:09:29.715 ] 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.715 13:25:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.715 "name": "Existed_Raid", 00:09:29.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.715 "strip_size_kb": 64, 00:09:29.715 "state": "configuring", 00:09:29.715 "raid_level": "concat", 00:09:29.715 "superblock": false, 00:09:29.715 "num_base_bdevs": 3, 00:09:29.715 "num_base_bdevs_discovered": 1, 00:09:29.715 "num_base_bdevs_operational": 3, 00:09:29.715 "base_bdevs_list": [ 00:09:29.715 { 00:09:29.715 "name": "BaseBdev1", 00:09:29.715 "uuid": "f6649607-2f47-4788-950f-34c52f5942a9", 00:09:29.715 "is_configured": true, 00:09:29.715 "data_offset": 0, 00:09:29.715 "data_size": 65536 00:09:29.715 }, 00:09:29.715 { 00:09:29.715 "name": "BaseBdev2", 00:09:29.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.715 "is_configured": false, 00:09:29.715 "data_offset": 0, 00:09:29.715 "data_size": 0 00:09:29.715 }, 00:09:29.715 { 00:09:29.715 "name": "BaseBdev3", 00:09:29.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.715 "is_configured": false, 00:09:29.715 "data_offset": 0, 00:09:29.715 "data_size": 0 00:09:29.715 } 00:09:29.715 ] 00:09:29.715 }' 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.715 13:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.285 [2024-11-18 13:26:00.133536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.285 [2024-11-18 13:26:00.133593] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.285 [2024-11-18 13:26:00.141557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.285 [2024-11-18 13:26:00.143418] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.285 [2024-11-18 13:26:00.143480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.285 [2024-11-18 13:26:00.143542] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.285 [2024-11-18 13:26:00.143566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.285 13:26:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.285 "name": "Existed_Raid", 00:09:30.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.285 "strip_size_kb": 64, 00:09:30.285 "state": "configuring", 00:09:30.285 "raid_level": "concat", 00:09:30.285 "superblock": false, 00:09:30.285 "num_base_bdevs": 3, 00:09:30.285 "num_base_bdevs_discovered": 1, 00:09:30.285 "num_base_bdevs_operational": 3, 00:09:30.285 "base_bdevs_list": [ 00:09:30.285 { 00:09:30.285 "name": "BaseBdev1", 00:09:30.285 "uuid": "f6649607-2f47-4788-950f-34c52f5942a9", 00:09:30.285 "is_configured": true, 00:09:30.285 "data_offset": 
0, 00:09:30.285 "data_size": 65536 00:09:30.285 }, 00:09:30.285 { 00:09:30.285 "name": "BaseBdev2", 00:09:30.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.285 "is_configured": false, 00:09:30.285 "data_offset": 0, 00:09:30.285 "data_size": 0 00:09:30.285 }, 00:09:30.285 { 00:09:30.285 "name": "BaseBdev3", 00:09:30.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.285 "is_configured": false, 00:09:30.285 "data_offset": 0, 00:09:30.285 "data_size": 0 00:09:30.285 } 00:09:30.285 ] 00:09:30.285 }' 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.285 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.545 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.545 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.545 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 [2024-11-18 13:26:00.627399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.806 BaseBdev2 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 [ 00:09:30.806 { 00:09:30.806 "name": "BaseBdev2", 00:09:30.806 "aliases": [ 00:09:30.806 "fd35a9f7-3659-4370-b3b1-8a162f0c60c8" 00:09:30.806 ], 00:09:30.806 "product_name": "Malloc disk", 00:09:30.806 "block_size": 512, 00:09:30.806 "num_blocks": 65536, 00:09:30.806 "uuid": "fd35a9f7-3659-4370-b3b1-8a162f0c60c8", 00:09:30.806 "assigned_rate_limits": { 00:09:30.806 "rw_ios_per_sec": 0, 00:09:30.806 "rw_mbytes_per_sec": 0, 00:09:30.806 "r_mbytes_per_sec": 0, 00:09:30.806 "w_mbytes_per_sec": 0 00:09:30.806 }, 00:09:30.806 "claimed": true, 00:09:30.806 "claim_type": "exclusive_write", 00:09:30.806 "zoned": false, 00:09:30.806 "supported_io_types": { 00:09:30.806 "read": true, 00:09:30.806 "write": true, 00:09:30.806 "unmap": true, 00:09:30.806 "flush": true, 00:09:30.806 "reset": true, 00:09:30.806 "nvme_admin": false, 00:09:30.806 "nvme_io": false, 00:09:30.806 "nvme_io_md": false, 00:09:30.806 "write_zeroes": true, 00:09:30.806 "zcopy": true, 00:09:30.806 "get_zone_info": false, 00:09:30.806 "zone_management": false, 00:09:30.806 "zone_append": false, 00:09:30.806 "compare": false, 00:09:30.806 "compare_and_write": false, 00:09:30.806 "abort": true, 00:09:30.806 "seek_hole": 
false, 00:09:30.806 "seek_data": false, 00:09:30.806 "copy": true, 00:09:30.806 "nvme_iov_md": false 00:09:30.806 }, 00:09:30.806 "memory_domains": [ 00:09:30.806 { 00:09:30.806 "dma_device_id": "system", 00:09:30.806 "dma_device_type": 1 00:09:30.806 }, 00:09:30.806 { 00:09:30.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.806 "dma_device_type": 2 00:09:30.806 } 00:09:30.806 ], 00:09:30.806 "driver_specific": {} 00:09:30.806 } 00:09:30.806 ] 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.806 "name": "Existed_Raid", 00:09:30.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.806 "strip_size_kb": 64, 00:09:30.806 "state": "configuring", 00:09:30.806 "raid_level": "concat", 00:09:30.806 "superblock": false, 00:09:30.806 "num_base_bdevs": 3, 00:09:30.806 "num_base_bdevs_discovered": 2, 00:09:30.806 "num_base_bdevs_operational": 3, 00:09:30.806 "base_bdevs_list": [ 00:09:30.806 { 00:09:30.806 "name": "BaseBdev1", 00:09:30.806 "uuid": "f6649607-2f47-4788-950f-34c52f5942a9", 00:09:30.806 "is_configured": true, 00:09:30.806 "data_offset": 0, 00:09:30.806 "data_size": 65536 00:09:30.806 }, 00:09:30.806 { 00:09:30.806 "name": "BaseBdev2", 00:09:30.806 "uuid": "fd35a9f7-3659-4370-b3b1-8a162f0c60c8", 00:09:30.806 "is_configured": true, 00:09:30.806 "data_offset": 0, 00:09:30.806 "data_size": 65536 00:09:30.806 }, 00:09:30.806 { 00:09:30.806 "name": "BaseBdev3", 00:09:30.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.806 "is_configured": false, 00:09:30.806 "data_offset": 0, 00:09:30.806 "data_size": 0 00:09:30.806 } 00:09:30.806 ] 00:09:30.806 }' 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.806 13:26:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.067 [2024-11-18 13:26:01.089537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.067 [2024-11-18 13:26:01.089586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:31.067 [2024-11-18 13:26:01.089599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:31.067 [2024-11-18 13:26:01.089856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:31.067 [2024-11-18 13:26:01.090013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:31.067 [2024-11-18 13:26:01.090023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:31.067 [2024-11-18 13:26:01.090308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.067 BaseBdev3 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.067 13:26:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.067 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.067 [ 00:09:31.067 { 00:09:31.067 "name": "BaseBdev3", 00:09:31.067 "aliases": [ 00:09:31.067 "379e0bbe-724a-48b6-841f-dc7913f49b9a" 00:09:31.067 ], 00:09:31.067 "product_name": "Malloc disk", 00:09:31.067 "block_size": 512, 00:09:31.067 "num_blocks": 65536, 00:09:31.067 "uuid": "379e0bbe-724a-48b6-841f-dc7913f49b9a", 00:09:31.067 "assigned_rate_limits": { 00:09:31.067 "rw_ios_per_sec": 0, 00:09:31.067 "rw_mbytes_per_sec": 0, 00:09:31.067 "r_mbytes_per_sec": 0, 00:09:31.067 "w_mbytes_per_sec": 0 00:09:31.067 }, 00:09:31.067 "claimed": true, 00:09:31.067 "claim_type": "exclusive_write", 00:09:31.067 "zoned": false, 00:09:31.067 "supported_io_types": { 00:09:31.067 "read": true, 00:09:31.067 "write": true, 00:09:31.067 "unmap": true, 00:09:31.067 "flush": true, 00:09:31.067 "reset": true, 00:09:31.067 "nvme_admin": false, 00:09:31.067 "nvme_io": false, 00:09:31.067 "nvme_io_md": false, 00:09:31.067 "write_zeroes": true, 00:09:31.067 "zcopy": true, 00:09:31.067 "get_zone_info": false, 00:09:31.067 "zone_management": false, 00:09:31.067 "zone_append": false, 00:09:31.067 "compare": false, 
00:09:31.067 "compare_and_write": false, 00:09:31.067 "abort": true, 00:09:31.067 "seek_hole": false, 00:09:31.327 "seek_data": false, 00:09:31.327 "copy": true, 00:09:31.327 "nvme_iov_md": false 00:09:31.327 }, 00:09:31.327 "memory_domains": [ 00:09:31.327 { 00:09:31.327 "dma_device_id": "system", 00:09:31.327 "dma_device_type": 1 00:09:31.327 }, 00:09:31.327 { 00:09:31.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.327 "dma_device_type": 2 00:09:31.327 } 00:09:31.327 ], 00:09:31.327 "driver_specific": {} 00:09:31.327 } 00:09:31.327 ] 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.327 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.327 "name": "Existed_Raid", 00:09:31.327 "uuid": "957be04b-a64f-4ca4-b3f2-ffec4d66e92f", 00:09:31.327 "strip_size_kb": 64, 00:09:31.328 "state": "online", 00:09:31.328 "raid_level": "concat", 00:09:31.328 "superblock": false, 00:09:31.328 "num_base_bdevs": 3, 00:09:31.328 "num_base_bdevs_discovered": 3, 00:09:31.328 "num_base_bdevs_operational": 3, 00:09:31.328 "base_bdevs_list": [ 00:09:31.328 { 00:09:31.328 "name": "BaseBdev1", 00:09:31.328 "uuid": "f6649607-2f47-4788-950f-34c52f5942a9", 00:09:31.328 "is_configured": true, 00:09:31.328 "data_offset": 0, 00:09:31.328 "data_size": 65536 00:09:31.328 }, 00:09:31.328 { 00:09:31.328 "name": "BaseBdev2", 00:09:31.328 "uuid": "fd35a9f7-3659-4370-b3b1-8a162f0c60c8", 00:09:31.328 "is_configured": true, 00:09:31.328 "data_offset": 0, 00:09:31.328 "data_size": 65536 00:09:31.328 }, 00:09:31.328 { 00:09:31.328 "name": "BaseBdev3", 00:09:31.328 "uuid": "379e0bbe-724a-48b6-841f-dc7913f49b9a", 00:09:31.328 "is_configured": true, 00:09:31.328 "data_offset": 0, 00:09:31.328 "data_size": 65536 00:09:31.328 } 00:09:31.328 ] 00:09:31.328 }' 00:09:31.328 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:31.328 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.588 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.588 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.588 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.588 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.588 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.588 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.588 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.589 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.589 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.589 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 [2024-11-18 13:26:01.569012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.589 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.589 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.589 "name": "Existed_Raid", 00:09:31.589 "aliases": [ 00:09:31.589 "957be04b-a64f-4ca4-b3f2-ffec4d66e92f" 00:09:31.589 ], 00:09:31.589 "product_name": "Raid Volume", 00:09:31.589 "block_size": 512, 00:09:31.589 "num_blocks": 196608, 00:09:31.589 "uuid": "957be04b-a64f-4ca4-b3f2-ffec4d66e92f", 00:09:31.589 "assigned_rate_limits": { 00:09:31.589 "rw_ios_per_sec": 0, 00:09:31.589 "rw_mbytes_per_sec": 0, 00:09:31.589 "r_mbytes_per_sec": 
0, 00:09:31.589 "w_mbytes_per_sec": 0 00:09:31.589 }, 00:09:31.589 "claimed": false, 00:09:31.589 "zoned": false, 00:09:31.589 "supported_io_types": { 00:09:31.589 "read": true, 00:09:31.589 "write": true, 00:09:31.589 "unmap": true, 00:09:31.589 "flush": true, 00:09:31.589 "reset": true, 00:09:31.589 "nvme_admin": false, 00:09:31.589 "nvme_io": false, 00:09:31.589 "nvme_io_md": false, 00:09:31.589 "write_zeroes": true, 00:09:31.589 "zcopy": false, 00:09:31.589 "get_zone_info": false, 00:09:31.589 "zone_management": false, 00:09:31.589 "zone_append": false, 00:09:31.589 "compare": false, 00:09:31.589 "compare_and_write": false, 00:09:31.589 "abort": false, 00:09:31.589 "seek_hole": false, 00:09:31.589 "seek_data": false, 00:09:31.589 "copy": false, 00:09:31.589 "nvme_iov_md": false 00:09:31.589 }, 00:09:31.589 "memory_domains": [ 00:09:31.589 { 00:09:31.589 "dma_device_id": "system", 00:09:31.589 "dma_device_type": 1 00:09:31.589 }, 00:09:31.589 { 00:09:31.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.589 "dma_device_type": 2 00:09:31.589 }, 00:09:31.589 { 00:09:31.589 "dma_device_id": "system", 00:09:31.589 "dma_device_type": 1 00:09:31.589 }, 00:09:31.589 { 00:09:31.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.589 "dma_device_type": 2 00:09:31.589 }, 00:09:31.589 { 00:09:31.589 "dma_device_id": "system", 00:09:31.589 "dma_device_type": 1 00:09:31.589 }, 00:09:31.589 { 00:09:31.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.589 "dma_device_type": 2 00:09:31.589 } 00:09:31.589 ], 00:09:31.589 "driver_specific": { 00:09:31.589 "raid": { 00:09:31.589 "uuid": "957be04b-a64f-4ca4-b3f2-ffec4d66e92f", 00:09:31.589 "strip_size_kb": 64, 00:09:31.589 "state": "online", 00:09:31.589 "raid_level": "concat", 00:09:31.589 "superblock": false, 00:09:31.589 "num_base_bdevs": 3, 00:09:31.589 "num_base_bdevs_discovered": 3, 00:09:31.589 "num_base_bdevs_operational": 3, 00:09:31.589 "base_bdevs_list": [ 00:09:31.589 { 00:09:31.589 "name": "BaseBdev1", 
00:09:31.589 "uuid": "f6649607-2f47-4788-950f-34c52f5942a9", 00:09:31.589 "is_configured": true, 00:09:31.589 "data_offset": 0, 00:09:31.589 "data_size": 65536 00:09:31.589 }, 00:09:31.589 { 00:09:31.589 "name": "BaseBdev2", 00:09:31.589 "uuid": "fd35a9f7-3659-4370-b3b1-8a162f0c60c8", 00:09:31.589 "is_configured": true, 00:09:31.589 "data_offset": 0, 00:09:31.589 "data_size": 65536 00:09:31.589 }, 00:09:31.589 { 00:09:31.589 "name": "BaseBdev3", 00:09:31.589 "uuid": "379e0bbe-724a-48b6-841f-dc7913f49b9a", 00:09:31.589 "is_configured": true, 00:09:31.589 "data_offset": 0, 00:09:31.590 "data_size": 65536 00:09:31.590 } 00:09:31.590 ] 00:09:31.590 } 00:09:31.590 } 00:09:31.590 }' 00:09:31.590 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.850 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:31.850 BaseBdev2 00:09:31.850 BaseBdev3' 00:09:31.850 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.850 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.850 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.850 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.850 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]]
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:31.851 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.851 [2024-11-18 13:26:01.868333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:31.851 [2024-11-18 13:26:01.868372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:31.851 [2024-11-18 13:26:01.868425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:32.111 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.112 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.112 13:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.112 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:32.112 "name": "Existed_Raid",
00:09:32.112 "uuid": "957be04b-a64f-4ca4-b3f2-ffec4d66e92f",
00:09:32.112 "strip_size_kb": 64,
00:09:32.112 "state": "offline",
00:09:32.112 "raid_level": "concat",
00:09:32.112 "superblock": false,
00:09:32.112 "num_base_bdevs": 3,
00:09:32.112 "num_base_bdevs_discovered": 2,
00:09:32.112 "num_base_bdevs_operational": 2,
00:09:32.112 "base_bdevs_list": [
00:09:32.112 {
00:09:32.112 "name": null,
00:09:32.112 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:32.112 "is_configured": false,
00:09:32.112 "data_offset": 0,
00:09:32.112 "data_size": 65536
00:09:32.112 },
00:09:32.112 {
00:09:32.112 "name": "BaseBdev2",
00:09:32.112 "uuid": "fd35a9f7-3659-4370-b3b1-8a162f0c60c8",
00:09:32.112 "is_configured": true,
00:09:32.112 "data_offset": 0,
00:09:32.112 "data_size": 65536
00:09:32.112 },
00:09:32.112 {
00:09:32.112 "name": "BaseBdev3",
00:09:32.112 "uuid": "379e0bbe-724a-48b6-841f-dc7913f49b9a",
00:09:32.112 "is_configured": true,
00:09:32.112 "data_offset": 0,
00:09:32.112 "data_size": 65536
00:09:32.112 }
00:09:32.112 ]
00:09:32.112 }'
00:09:32.112 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:32.112 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.372 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:32.372 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:32.372 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.372 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.372 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.372 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.632 [2024-11-18 13:26:02.468809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.632 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.632 [2024-11-18 13:26:02.616324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:32.632 [2024-11-18 13:26:02.616384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.893 BaseBdev2
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.893 [
00:09:32.893 {
00:09:32.893 "name": "BaseBdev2",
00:09:32.893 "aliases": [
00:09:32.893 "8a00e372-2681-4f41-a2ed-2fee5429514a"
00:09:32.893 ],
00:09:32.893 "product_name": "Malloc disk",
00:09:32.893 "block_size": 512,
00:09:32.893 "num_blocks": 65536,
00:09:32.893 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a",
00:09:32.893 "assigned_rate_limits": {
00:09:32.893 "rw_ios_per_sec": 0,
00:09:32.893 "rw_mbytes_per_sec": 0,
00:09:32.893 "r_mbytes_per_sec": 0,
00:09:32.893 "w_mbytes_per_sec": 0
00:09:32.893 },
00:09:32.893 "claimed": false,
00:09:32.893 "zoned": false,
00:09:32.893 "supported_io_types": {
00:09:32.893 "read": true,
00:09:32.893 "write": true,
00:09:32.893 "unmap": true,
00:09:32.893 "flush": true,
00:09:32.893 "reset": true,
00:09:32.893 "nvme_admin": false,
00:09:32.893 "nvme_io": false,
00:09:32.893 "nvme_io_md": false,
00:09:32.893 "write_zeroes": true,
00:09:32.893 "zcopy": true,
00:09:32.893 "get_zone_info": false,
00:09:32.893 "zone_management": false,
00:09:32.893 "zone_append": false,
00:09:32.893 "compare": false,
00:09:32.893 "compare_and_write": false,
00:09:32.893 "abort": true,
00:09:32.893 "seek_hole": false,
00:09:32.893 "seek_data": false,
00:09:32.893 "copy": true,
00:09:32.893 "nvme_iov_md": false
00:09:32.893 },
00:09:32.893 "memory_domains": [
00:09:32.893 {
00:09:32.893 "dma_device_id": "system",
00:09:32.893 "dma_device_type": 1
00:09:32.893 },
00:09:32.893 {
00:09:32.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:32.893 "dma_device_type": 2
00:09:32.893 }
00:09:32.893 ],
00:09:32.893 "driver_specific": {}
00:09:32.893 }
00:09:32.893 ]
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.893 BaseBdev3
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:32.893 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.894 [
00:09:32.894 {
00:09:32.894 "name": "BaseBdev3",
00:09:32.894 "aliases": [
00:09:32.894 "1cea574e-894d-4344-a220-82b07177d804"
00:09:32.894 ],
00:09:32.894 "product_name": "Malloc disk",
00:09:32.894 "block_size": 512,
00:09:32.894 "num_blocks": 65536,
00:09:32.894 "uuid": "1cea574e-894d-4344-a220-82b07177d804",
00:09:32.894 "assigned_rate_limits": {
00:09:32.894 "rw_ios_per_sec": 0,
00:09:32.894 "rw_mbytes_per_sec": 0,
00:09:32.894 "r_mbytes_per_sec": 0,
00:09:32.894 "w_mbytes_per_sec": 0
00:09:32.894 },
00:09:32.894 "claimed": false,
00:09:32.894 "zoned": false,
00:09:32.894 "supported_io_types": {
00:09:32.894 "read": true,
00:09:32.894 "write": true,
00:09:32.894 "unmap": true,
00:09:32.894 "flush": true,
00:09:32.894 "reset": true,
00:09:32.894 "nvme_admin": false,
00:09:32.894 "nvme_io": false,
00:09:32.894 "nvme_io_md": false,
00:09:32.894 "write_zeroes": true,
00:09:32.894 "zcopy": true,
00:09:32.894 "get_zone_info": false,
00:09:32.894 "zone_management": false,
00:09:32.894 "zone_append": false,
00:09:32.894 "compare": false,
00:09:32.894 "compare_and_write": false,
00:09:32.894 "abort": true,
00:09:32.894 "seek_hole": false,
00:09:32.894 "seek_data": false,
00:09:32.894 "copy": true,
00:09:32.894 "nvme_iov_md": false
00:09:32.894 },
00:09:32.894 "memory_domains": [
00:09:32.894 {
00:09:32.894 "dma_device_id": "system",
00:09:32.894 "dma_device_type": 1
00:09:32.894 },
00:09:32.894 {
00:09:32.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:32.894 "dma_device_type": 2
00:09:32.894 }
00:09:32.894 ],
00:09:32.894 "driver_specific": {}
00:09:32.894 }
00:09:32.894 ]
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.894 [2024-11-18 13:26:02.906683] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:32.894 [2024-11-18 13:26:02.906822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:32.894 [2024-11-18 13:26:02.906868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:32.894 [2024-11-18 13:26:02.908590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:32.894 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.155 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:33.155 "name": "Existed_Raid",
00:09:33.155 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.155 "strip_size_kb": 64,
00:09:33.155 "state": "configuring",
00:09:33.155 "raid_level": "concat",
00:09:33.155 "superblock": false,
00:09:33.155 "num_base_bdevs": 3,
00:09:33.155 "num_base_bdevs_discovered": 2,
00:09:33.155 "num_base_bdevs_operational": 3,
00:09:33.155 "base_bdevs_list": [
00:09:33.155 {
00:09:33.155 "name": "BaseBdev1",
00:09:33.155 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.155 "is_configured": false,
00:09:33.155 "data_offset": 0,
00:09:33.155 "data_size": 0
00:09:33.155 },
00:09:33.155 {
00:09:33.155 "name": "BaseBdev2",
00:09:33.155 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a",
00:09:33.155 "is_configured": true,
00:09:33.155 "data_offset": 0,
00:09:33.155 "data_size": 65536
00:09:33.155 },
00:09:33.155 {
00:09:33.155 "name": "BaseBdev3",
00:09:33.155 "uuid": "1cea574e-894d-4344-a220-82b07177d804",
00:09:33.155 "is_configured": true,
00:09:33.155 "data_offset": 0,
00:09:33.155 "data_size": 65536
00:09:33.155 }
00:09:33.155 ]
00:09:33.155 }'
00:09:33.155 13:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:33.155 13:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.416 [2024-11-18 13:26:03.341991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:33.416 "name": "Existed_Raid",
00:09:33.416 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.416 "strip_size_kb": 64,
00:09:33.416 "state": "configuring",
00:09:33.416 "raid_level": "concat",
00:09:33.416 "superblock": false,
00:09:33.416 "num_base_bdevs": 3,
00:09:33.416 "num_base_bdevs_discovered": 1,
00:09:33.416 "num_base_bdevs_operational": 3,
00:09:33.416 "base_bdevs_list": [
00:09:33.416 {
00:09:33.416 "name": "BaseBdev1",
00:09:33.416 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.416 "is_configured": false,
00:09:33.416 "data_offset": 0,
00:09:33.416 "data_size": 0
00:09:33.416 },
00:09:33.416 {
00:09:33.416 "name": null,
00:09:33.416 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a",
00:09:33.416 "is_configured": false,
00:09:33.416 "data_offset": 0,
00:09:33.416 "data_size": 65536
00:09:33.416 },
00:09:33.416 {
00:09:33.416 "name": "BaseBdev3",
00:09:33.416 "uuid": "1cea574e-894d-4344-a220-82b07177d804",
00:09:33.416 "is_configured": true,
00:09:33.416 "data_offset": 0,
00:09:33.416 "data_size": 65536
00:09:33.416 }
00:09:33.416 ]
00:09:33.416 }'
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:33.416 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.988 [2024-11-18 13:26:03.812607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:33.988 BaseBdev1
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.988 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.988 [
00:09:33.988 {
00:09:33.988 "name": "BaseBdev1",
00:09:33.988 "aliases": [
00:09:33.988 "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0"
00:09:33.988 ],
00:09:33.988 "product_name": "Malloc disk",
00:09:33.988 "block_size": 512,
00:09:33.988 "num_blocks": 65536,
00:09:33.988 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0",
00:09:33.988 "assigned_rate_limits": {
00:09:33.988 "rw_ios_per_sec": 0,
00:09:33.988 "rw_mbytes_per_sec": 0,
00:09:33.988 "r_mbytes_per_sec": 0,
00:09:33.988 "w_mbytes_per_sec": 0
00:09:33.988 },
00:09:33.988 "claimed": true,
00:09:33.988 "claim_type": "exclusive_write",
00:09:33.988 "zoned": false,
00:09:33.988 "supported_io_types": {
00:09:33.988 "read": true,
00:09:33.988 "write": true,
00:09:33.988 "unmap": true,
00:09:33.988 "flush": true,
00:09:33.988 "reset": true,
00:09:33.988 "nvme_admin": false,
00:09:33.988 "nvme_io": false,
00:09:33.988 "nvme_io_md": false,
00:09:33.988 "write_zeroes": true,
00:09:33.988 "zcopy": true,
00:09:33.988 "get_zone_info": false,
00:09:33.988 "zone_management": false,
00:09:33.988 "zone_append": false,
00:09:33.988 "compare": false,
00:09:33.988 "compare_and_write": false,
00:09:33.988 "abort": true,
00:09:33.988 "seek_hole": false,
00:09:33.988 "seek_data": false,
00:09:33.988 "copy": true,
00:09:33.988 "nvme_iov_md": false
00:09:33.988 },
00:09:33.988 "memory_domains": [
00:09:33.988 {
00:09:33.988 "dma_device_id": "system",
00:09:33.988 "dma_device_type": 1
00:09:33.989 },
00:09:33.989 {
00:09:33.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.989 "dma_device_type": 2
00:09:33.989 }
00:09:33.989 ],
00:09:33.989 "driver_specific": {}
00:09:33.989 }
00:09:33.989 ]
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:33.989 "name": "Existed_Raid",
00:09:33.989 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.989 "strip_size_kb": 64,
00:09:33.989 "state": "configuring",
00:09:33.989 "raid_level": "concat",
00:09:33.989 "superblock": false,
00:09:33.989 "num_base_bdevs": 3,
00:09:33.989 "num_base_bdevs_discovered": 2,
00:09:33.989 "num_base_bdevs_operational": 3,
00:09:33.989 "base_bdevs_list": [
00:09:33.989 {
00:09:33.989 "name": "BaseBdev1",
00:09:33.989 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0",
00:09:33.989 "is_configured": true,
00:09:33.989 "data_offset": 0,
00:09:33.989 "data_size": 65536
00:09:33.989 },
00:09:33.989 {
00:09:33.989 "name": null,
00:09:33.989 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a",
00:09:33.989 "is_configured": false,
00:09:33.989 "data_offset": 0,
00:09:33.989 "data_size": 65536
00:09:33.989 },
00:09:33.989 {
00:09:33.989 "name": "BaseBdev3",
00:09:33.989 "uuid": "1cea574e-894d-4344-a220-82b07177d804",
00:09:33.989 "is_configured": true,
00:09:33.989 "data_offset": 0,
00:09:33.989 "data_size": 65536
00:09:33.989 }
00:09:33.989 ]
00:09:33.989 }'
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:33.989 13:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.560 [2024-11-18 13:26:04.375711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.560 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:34.560 "name": "Existed_Raid",
00:09:34.560 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.560 "strip_size_kb": 64,
00:09:34.560 "state": "configuring",
00:09:34.560 "raid_level": "concat",
00:09:34.560 "superblock": false,
00:09:34.560 "num_base_bdevs": 3,
00:09:34.560 "num_base_bdevs_discovered": 1,
00:09:34.560 "num_base_bdevs_operational": 3,
00:09:34.560 "base_bdevs_list": [
00:09:34.560 {
00:09:34.560 "name": "BaseBdev1",
00:09:34.560 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0",
00:09:34.560 "is_configured": true,
00:09:34.560 "data_offset": 0,
00:09:34.560 "data_size": 65536
00:09:34.560 },
00:09:34.560 {
00:09:34.560 "name": null,
00:09:34.560 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a",
00:09:34.560 "is_configured": false,
00:09:34.560 "data_offset": 0,
00:09:34.560 "data_size": 65536
00:09:34.560 },
00:09:34.561 {
00:09:34.561 "name": null,
00:09:34.561 "uuid": "1cea574e-894d-4344-a220-82b07177d804",
00:09:34.561 "is_configured": false,
00:09:34.561 "data_offset": 0,
00:09:34.561 "data_size": 65536
00:09:34.561 }
00:09:34.561 ]
00:09:34.561 }'
00:09:34.561 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:34.561 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.821 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.081 [2024-11-18 13:26:04.874950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.081 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.082 "name": "Existed_Raid", 00:09:35.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.082 "strip_size_kb": 64, 00:09:35.082 "state": "configuring", 00:09:35.082 "raid_level": "concat", 00:09:35.082 "superblock": false, 00:09:35.082 "num_base_bdevs": 3, 00:09:35.082 "num_base_bdevs_discovered": 2, 00:09:35.082 "num_base_bdevs_operational": 3, 00:09:35.082 "base_bdevs_list": [ 00:09:35.082 { 00:09:35.082 "name": "BaseBdev1", 00:09:35.082 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0", 00:09:35.082 "is_configured": true, 00:09:35.082 "data_offset": 0, 00:09:35.082 "data_size": 65536 00:09:35.082 }, 00:09:35.082 { 00:09:35.082 "name": null, 00:09:35.082 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a", 00:09:35.082 "is_configured": false, 00:09:35.082 "data_offset": 0, 00:09:35.082 "data_size": 65536 00:09:35.082 }, 00:09:35.082 { 00:09:35.082 "name": "BaseBdev3", 00:09:35.082 "uuid": "1cea574e-894d-4344-a220-82b07177d804", 00:09:35.082 "is_configured": true, 00:09:35.082 "data_offset": 0, 00:09:35.082 "data_size": 65536 00:09:35.082 } 00:09:35.082 ] 00:09:35.082 }' 00:09:35.082 13:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.082 13:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.342 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.342 [2024-11-18 13:26:05.362101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.602 13:26:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.602 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.602 "name": "Existed_Raid", 00:09:35.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.602 "strip_size_kb": 64, 00:09:35.602 "state": "configuring", 00:09:35.602 "raid_level": "concat", 00:09:35.602 "superblock": false, 00:09:35.602 "num_base_bdevs": 3, 00:09:35.602 "num_base_bdevs_discovered": 1, 00:09:35.602 "num_base_bdevs_operational": 3, 00:09:35.602 "base_bdevs_list": [ 00:09:35.602 { 00:09:35.602 "name": null, 00:09:35.602 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0", 00:09:35.602 "is_configured": false, 00:09:35.602 "data_offset": 0, 00:09:35.602 "data_size": 65536 00:09:35.602 }, 00:09:35.602 { 00:09:35.602 "name": null, 00:09:35.602 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a", 00:09:35.602 "is_configured": false, 00:09:35.602 "data_offset": 0, 00:09:35.602 "data_size": 65536 00:09:35.602 }, 00:09:35.602 { 00:09:35.602 "name": "BaseBdev3", 00:09:35.602 "uuid": "1cea574e-894d-4344-a220-82b07177d804", 00:09:35.602 "is_configured": true, 00:09:35.603 "data_offset": 0, 00:09:35.603 "data_size": 65536 00:09:35.603 } 00:09:35.603 ] 00:09:35.603 }' 00:09:35.603 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.603 13:26:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.863 [2024-11-18 13:26:05.900386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.863 13:26:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.863 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.123 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.123 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.123 "name": "Existed_Raid", 00:09:36.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.123 "strip_size_kb": 64, 00:09:36.123 "state": "configuring", 00:09:36.123 "raid_level": "concat", 00:09:36.123 "superblock": false, 00:09:36.123 "num_base_bdevs": 3, 00:09:36.123 "num_base_bdevs_discovered": 2, 00:09:36.123 "num_base_bdevs_operational": 3, 00:09:36.123 "base_bdevs_list": [ 00:09:36.123 { 00:09:36.123 "name": null, 00:09:36.123 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0", 00:09:36.123 "is_configured": false, 00:09:36.123 "data_offset": 0, 00:09:36.123 "data_size": 65536 00:09:36.123 }, 00:09:36.123 { 00:09:36.123 "name": "BaseBdev2", 00:09:36.123 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a", 00:09:36.123 "is_configured": true, 00:09:36.123 "data_offset": 
0, 00:09:36.123 "data_size": 65536 00:09:36.123 }, 00:09:36.123 { 00:09:36.123 "name": "BaseBdev3", 00:09:36.123 "uuid": "1cea574e-894d-4344-a220-82b07177d804", 00:09:36.123 "is_configured": true, 00:09:36.123 "data_offset": 0, 00:09:36.123 "data_size": 65536 00:09:36.123 } 00:09:36.123 ] 00:09:36.123 }' 00:09:36.123 13:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.123 13:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.383 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.644 [2024-11-18 13:26:06.448727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:36.644 [2024-11-18 13:26:06.448857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:36.644 [2024-11-18 13:26:06.448883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:36.644 [2024-11-18 13:26:06.449164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:36.644 [2024-11-18 13:26:06.449352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:36.644 [2024-11-18 13:26:06.449393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:36.644 [2024-11-18 13:26:06.449678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.644 NewBaseBdev 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.644 
13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.644 [ 00:09:36.644 { 00:09:36.644 "name": "NewBaseBdev", 00:09:36.644 "aliases": [ 00:09:36.644 "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0" 00:09:36.644 ], 00:09:36.644 "product_name": "Malloc disk", 00:09:36.644 "block_size": 512, 00:09:36.644 "num_blocks": 65536, 00:09:36.644 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0", 00:09:36.644 "assigned_rate_limits": { 00:09:36.644 "rw_ios_per_sec": 0, 00:09:36.644 "rw_mbytes_per_sec": 0, 00:09:36.644 "r_mbytes_per_sec": 0, 00:09:36.644 "w_mbytes_per_sec": 0 00:09:36.644 }, 00:09:36.644 "claimed": true, 00:09:36.644 "claim_type": "exclusive_write", 00:09:36.644 "zoned": false, 00:09:36.644 "supported_io_types": { 00:09:36.644 "read": true, 00:09:36.644 "write": true, 00:09:36.644 "unmap": true, 00:09:36.644 "flush": true, 00:09:36.644 "reset": true, 00:09:36.644 "nvme_admin": false, 00:09:36.644 "nvme_io": false, 00:09:36.644 "nvme_io_md": false, 00:09:36.644 "write_zeroes": true, 00:09:36.644 "zcopy": true, 00:09:36.644 "get_zone_info": false, 00:09:36.644 "zone_management": false, 00:09:36.644 "zone_append": false, 00:09:36.644 "compare": false, 00:09:36.644 "compare_and_write": false, 00:09:36.644 "abort": true, 00:09:36.644 "seek_hole": false, 00:09:36.644 "seek_data": false, 00:09:36.644 "copy": true, 00:09:36.644 "nvme_iov_md": false 00:09:36.644 }, 00:09:36.644 
"memory_domains": [ 00:09:36.644 { 00:09:36.644 "dma_device_id": "system", 00:09:36.644 "dma_device_type": 1 00:09:36.644 }, 00:09:36.644 { 00:09:36.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.644 "dma_device_type": 2 00:09:36.644 } 00:09:36.644 ], 00:09:36.644 "driver_specific": {} 00:09:36.644 } 00:09:36.644 ] 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.644 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.645 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.645 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.645 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.645 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.645 "name": "Existed_Raid", 00:09:36.645 "uuid": "b3fd7f68-ec98-45de-beb1-9183e56dadd7", 00:09:36.645 "strip_size_kb": 64, 00:09:36.645 "state": "online", 00:09:36.645 "raid_level": "concat", 00:09:36.645 "superblock": false, 00:09:36.645 "num_base_bdevs": 3, 00:09:36.645 "num_base_bdevs_discovered": 3, 00:09:36.645 "num_base_bdevs_operational": 3, 00:09:36.645 "base_bdevs_list": [ 00:09:36.645 { 00:09:36.645 "name": "NewBaseBdev", 00:09:36.645 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0", 00:09:36.645 "is_configured": true, 00:09:36.645 "data_offset": 0, 00:09:36.645 "data_size": 65536 00:09:36.645 }, 00:09:36.645 { 00:09:36.645 "name": "BaseBdev2", 00:09:36.645 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a", 00:09:36.645 "is_configured": true, 00:09:36.645 "data_offset": 0, 00:09:36.645 "data_size": 65536 00:09:36.645 }, 00:09:36.645 { 00:09:36.645 "name": "BaseBdev3", 00:09:36.645 "uuid": "1cea574e-894d-4344-a220-82b07177d804", 00:09:36.645 "is_configured": true, 00:09:36.645 "data_offset": 0, 00:09:36.645 "data_size": 65536 00:09:36.645 } 00:09:36.645 ] 00:09:36.645 }' 00:09:36.645 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.645 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.905 [2024-11-18 13:26:06.932280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.905 13:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.166 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.166 "name": "Existed_Raid", 00:09:37.166 "aliases": [ 00:09:37.166 "b3fd7f68-ec98-45de-beb1-9183e56dadd7" 00:09:37.166 ], 00:09:37.166 "product_name": "Raid Volume", 00:09:37.166 "block_size": 512, 00:09:37.166 "num_blocks": 196608, 00:09:37.166 "uuid": "b3fd7f68-ec98-45de-beb1-9183e56dadd7", 00:09:37.166 "assigned_rate_limits": { 00:09:37.166 "rw_ios_per_sec": 0, 00:09:37.166 "rw_mbytes_per_sec": 0, 00:09:37.166 "r_mbytes_per_sec": 0, 00:09:37.166 "w_mbytes_per_sec": 0 00:09:37.166 }, 00:09:37.166 "claimed": false, 00:09:37.166 "zoned": false, 00:09:37.166 "supported_io_types": { 00:09:37.166 "read": true, 00:09:37.166 "write": true, 00:09:37.166 "unmap": true, 00:09:37.166 "flush": true, 00:09:37.166 "reset": true, 00:09:37.166 "nvme_admin": false, 00:09:37.166 "nvme_io": false, 00:09:37.166 "nvme_io_md": false, 00:09:37.166 "write_zeroes": true, 
00:09:37.166 "zcopy": false, 00:09:37.166 "get_zone_info": false, 00:09:37.166 "zone_management": false, 00:09:37.166 "zone_append": false, 00:09:37.166 "compare": false, 00:09:37.166 "compare_and_write": false, 00:09:37.166 "abort": false, 00:09:37.166 "seek_hole": false, 00:09:37.166 "seek_data": false, 00:09:37.166 "copy": false, 00:09:37.166 "nvme_iov_md": false 00:09:37.166 }, 00:09:37.166 "memory_domains": [ 00:09:37.166 { 00:09:37.166 "dma_device_id": "system", 00:09:37.167 "dma_device_type": 1 00:09:37.167 }, 00:09:37.167 { 00:09:37.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.167 "dma_device_type": 2 00:09:37.167 }, 00:09:37.167 { 00:09:37.167 "dma_device_id": "system", 00:09:37.167 "dma_device_type": 1 00:09:37.167 }, 00:09:37.167 { 00:09:37.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.167 "dma_device_type": 2 00:09:37.167 }, 00:09:37.167 { 00:09:37.167 "dma_device_id": "system", 00:09:37.167 "dma_device_type": 1 00:09:37.167 }, 00:09:37.167 { 00:09:37.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.167 "dma_device_type": 2 00:09:37.167 } 00:09:37.167 ], 00:09:37.167 "driver_specific": { 00:09:37.167 "raid": { 00:09:37.167 "uuid": "b3fd7f68-ec98-45de-beb1-9183e56dadd7", 00:09:37.167 "strip_size_kb": 64, 00:09:37.167 "state": "online", 00:09:37.167 "raid_level": "concat", 00:09:37.167 "superblock": false, 00:09:37.167 "num_base_bdevs": 3, 00:09:37.167 "num_base_bdevs_discovered": 3, 00:09:37.167 "num_base_bdevs_operational": 3, 00:09:37.167 "base_bdevs_list": [ 00:09:37.167 { 00:09:37.167 "name": "NewBaseBdev", 00:09:37.167 "uuid": "692dccb1-8b64-4dd1-bc68-5c7d9d2ec5b0", 00:09:37.167 "is_configured": true, 00:09:37.167 "data_offset": 0, 00:09:37.167 "data_size": 65536 00:09:37.167 }, 00:09:37.167 { 00:09:37.167 "name": "BaseBdev2", 00:09:37.167 "uuid": "8a00e372-2681-4f41-a2ed-2fee5429514a", 00:09:37.167 "is_configured": true, 00:09:37.167 "data_offset": 0, 00:09:37.167 "data_size": 65536 00:09:37.167 }, 00:09:37.167 { 
00:09:37.167 "name": "BaseBdev3", 00:09:37.167 "uuid": "1cea574e-894d-4344-a220-82b07177d804", 00:09:37.167 "is_configured": true, 00:09:37.167 "data_offset": 0, 00:09:37.167 "data_size": 65536 00:09:37.167 } 00:09:37.167 ] 00:09:37.167 } 00:09:37.167 } 00:09:37.167 }' 00:09:37.167 13:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:37.167 BaseBdev2 00:09:37.167 BaseBdev3' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.167 13:26:07 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:37.167 [2024-11-18 13:26:07.215468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.167 [2024-11-18 13:26:07.215571] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.167 [2024-11-18 13:26:07.215648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.167 [2024-11-18 13:26:07.215700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.167 [2024-11-18 13:26:07.215713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65628 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65628 ']' 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65628 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65628 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.428 killing process with pid 65628 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65628' 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65628 00:09:37.428 [2024-11-18 13:26:07.265616] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.428 13:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65628 00:09:37.690 [2024-11-18 13:26:07.566520] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.632 13:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.632 00:09:38.632 real 0m10.542s 00:09:38.632 user 0m16.758s 00:09:38.632 sys 0m1.918s 00:09:38.632 ************************************ 00:09:38.632 END TEST raid_state_function_test 00:09:38.632 ************************************ 00:09:38.632 13:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.632 13:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.893 13:26:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:38.893 13:26:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:38.893 13:26:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.893 13:26:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.893 ************************************ 00:09:38.893 START TEST raid_state_function_test_sb 00:09:38.893 ************************************ 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:38.893 Process raid pid: 66249 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66249 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66249' 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66249 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66249 ']' 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.893 13:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.893 [2024-11-18 13:26:08.860278] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:38.893 [2024-11-18 13:26:08.860544] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.154 [2024-11-18 13:26:09.044508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.154 [2024-11-18 13:26:09.160388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.413 [2024-11-18 13:26:09.370142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.413 [2024-11-18 13:26:09.370266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.673 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.673 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:39.673 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.673 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.673 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.673 [2024-11-18 13:26:09.717317] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.673 [2024-11-18 13:26:09.717373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.673 [2024-11-18 
13:26:09.717383] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.673 [2024-11-18 13:26:09.717392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.673 [2024-11-18 13:26:09.717398] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.673 [2024-11-18 13:26:09.717405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.933 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.934 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.934 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.934 "name": "Existed_Raid", 00:09:39.934 "uuid": "1af07c1f-9951-40aa-973b-b2c8e5ad6950", 00:09:39.934 "strip_size_kb": 64, 00:09:39.934 "state": "configuring", 00:09:39.934 "raid_level": "concat", 00:09:39.934 "superblock": true, 00:09:39.934 "num_base_bdevs": 3, 00:09:39.934 "num_base_bdevs_discovered": 0, 00:09:39.934 "num_base_bdevs_operational": 3, 00:09:39.934 "base_bdevs_list": [ 00:09:39.934 { 00:09:39.934 "name": "BaseBdev1", 00:09:39.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.934 "is_configured": false, 00:09:39.934 "data_offset": 0, 00:09:39.934 "data_size": 0 00:09:39.934 }, 00:09:39.934 { 00:09:39.934 "name": "BaseBdev2", 00:09:39.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.934 "is_configured": false, 00:09:39.934 "data_offset": 0, 00:09:39.934 "data_size": 0 00:09:39.934 }, 00:09:39.934 { 00:09:39.934 "name": "BaseBdev3", 00:09:39.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.934 "is_configured": false, 00:09:39.934 "data_offset": 0, 00:09:39.934 "data_size": 0 00:09:39.934 } 00:09:39.934 ] 00:09:39.934 }' 00:09:39.934 13:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.934 13:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 [2024-11-18 13:26:10.208499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.194 [2024-11-18 13:26:10.208633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.194 [2024-11-18 13:26:10.220435] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.194 [2024-11-18 13:26:10.220523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.194 [2024-11-18 13:26:10.220549] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.194 [2024-11-18 13:26:10.220570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.194 [2024-11-18 13:26:10.220587] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.194 [2024-11-18 13:26:10.220606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.194 
13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.194 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.454 [2024-11-18 13:26:10.265757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.454 BaseBdev1 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.454 [ 00:09:40.454 { 
00:09:40.454 "name": "BaseBdev1", 00:09:40.454 "aliases": [ 00:09:40.454 "df5b7510-e484-456f-8695-0fa61c51fc93" 00:09:40.454 ], 00:09:40.454 "product_name": "Malloc disk", 00:09:40.454 "block_size": 512, 00:09:40.454 "num_blocks": 65536, 00:09:40.454 "uuid": "df5b7510-e484-456f-8695-0fa61c51fc93", 00:09:40.454 "assigned_rate_limits": { 00:09:40.454 "rw_ios_per_sec": 0, 00:09:40.454 "rw_mbytes_per_sec": 0, 00:09:40.454 "r_mbytes_per_sec": 0, 00:09:40.454 "w_mbytes_per_sec": 0 00:09:40.454 }, 00:09:40.454 "claimed": true, 00:09:40.454 "claim_type": "exclusive_write", 00:09:40.454 "zoned": false, 00:09:40.454 "supported_io_types": { 00:09:40.454 "read": true, 00:09:40.454 "write": true, 00:09:40.454 "unmap": true, 00:09:40.454 "flush": true, 00:09:40.454 "reset": true, 00:09:40.454 "nvme_admin": false, 00:09:40.454 "nvme_io": false, 00:09:40.454 "nvme_io_md": false, 00:09:40.454 "write_zeroes": true, 00:09:40.454 "zcopy": true, 00:09:40.454 "get_zone_info": false, 00:09:40.454 "zone_management": false, 00:09:40.454 "zone_append": false, 00:09:40.454 "compare": false, 00:09:40.454 "compare_and_write": false, 00:09:40.454 "abort": true, 00:09:40.454 "seek_hole": false, 00:09:40.454 "seek_data": false, 00:09:40.454 "copy": true, 00:09:40.454 "nvme_iov_md": false 00:09:40.454 }, 00:09:40.454 "memory_domains": [ 00:09:40.454 { 00:09:40.454 "dma_device_id": "system", 00:09:40.454 "dma_device_type": 1 00:09:40.454 }, 00:09:40.454 { 00:09:40.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.454 "dma_device_type": 2 00:09:40.454 } 00:09:40.454 ], 00:09:40.454 "driver_specific": {} 00:09:40.454 } 00:09:40.454 ] 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.454 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.455 "name": "Existed_Raid", 00:09:40.455 "uuid": "ea5a8998-d44a-49ab-9ec1-67ed05eb0b0d", 00:09:40.455 "strip_size_kb": 64, 00:09:40.455 "state": "configuring", 00:09:40.455 "raid_level": "concat", 00:09:40.455 "superblock": true, 00:09:40.455 
"num_base_bdevs": 3, 00:09:40.455 "num_base_bdevs_discovered": 1, 00:09:40.455 "num_base_bdevs_operational": 3, 00:09:40.455 "base_bdevs_list": [ 00:09:40.455 { 00:09:40.455 "name": "BaseBdev1", 00:09:40.455 "uuid": "df5b7510-e484-456f-8695-0fa61c51fc93", 00:09:40.455 "is_configured": true, 00:09:40.455 "data_offset": 2048, 00:09:40.455 "data_size": 63488 00:09:40.455 }, 00:09:40.455 { 00:09:40.455 "name": "BaseBdev2", 00:09:40.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.455 "is_configured": false, 00:09:40.455 "data_offset": 0, 00:09:40.455 "data_size": 0 00:09:40.455 }, 00:09:40.455 { 00:09:40.455 "name": "BaseBdev3", 00:09:40.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.455 "is_configured": false, 00:09:40.455 "data_offset": 0, 00:09:40.455 "data_size": 0 00:09:40.455 } 00:09:40.455 ] 00:09:40.455 }' 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.455 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.715 [2024-11-18 13:26:10.701081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.715 [2024-11-18 13:26:10.701158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.715 
13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.715 [2024-11-18 13:26:10.713100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.715 [2024-11-18 13:26:10.715161] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.715 [2024-11-18 13:26:10.715205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.715 [2024-11-18 13:26:10.715217] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.715 [2024-11-18 13:26:10.715228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.715 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.991 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.991 "name": "Existed_Raid", 00:09:40.991 "uuid": "3f8f9a91-5378-4c21-bc99-adda1962c745", 00:09:40.991 "strip_size_kb": 64, 00:09:40.991 "state": "configuring", 00:09:40.991 "raid_level": "concat", 00:09:40.991 "superblock": true, 00:09:40.991 "num_base_bdevs": 3, 00:09:40.991 "num_base_bdevs_discovered": 1, 00:09:40.991 "num_base_bdevs_operational": 3, 00:09:40.991 "base_bdevs_list": [ 00:09:40.991 { 00:09:40.991 "name": "BaseBdev1", 00:09:40.991 "uuid": "df5b7510-e484-456f-8695-0fa61c51fc93", 00:09:40.991 "is_configured": true, 00:09:40.991 "data_offset": 2048, 00:09:40.991 "data_size": 63488 00:09:40.991 }, 00:09:40.991 { 00:09:40.991 "name": "BaseBdev2", 00:09:40.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.991 "is_configured": false, 00:09:40.991 "data_offset": 0, 00:09:40.991 "data_size": 0 00:09:40.991 }, 00:09:40.991 { 00:09:40.991 "name": "BaseBdev3", 00:09:40.991 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:40.991 "is_configured": false, 00:09:40.991 "data_offset": 0, 00:09:40.991 "data_size": 0 00:09:40.992 } 00:09:40.992 ] 00:09:40.992 }' 00:09:40.992 13:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.992 13:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.250 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.250 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.250 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.250 [2024-11-18 13:26:11.233666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.250 BaseBdev2 00:09:41.250 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.250 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.251 [ 00:09:41.251 { 00:09:41.251 "name": "BaseBdev2", 00:09:41.251 "aliases": [ 00:09:41.251 "e78338c1-3565-40e4-a868-e594662cd95b" 00:09:41.251 ], 00:09:41.251 "product_name": "Malloc disk", 00:09:41.251 "block_size": 512, 00:09:41.251 "num_blocks": 65536, 00:09:41.251 "uuid": "e78338c1-3565-40e4-a868-e594662cd95b", 00:09:41.251 "assigned_rate_limits": { 00:09:41.251 "rw_ios_per_sec": 0, 00:09:41.251 "rw_mbytes_per_sec": 0, 00:09:41.251 "r_mbytes_per_sec": 0, 00:09:41.251 "w_mbytes_per_sec": 0 00:09:41.251 }, 00:09:41.251 "claimed": true, 00:09:41.251 "claim_type": "exclusive_write", 00:09:41.251 "zoned": false, 00:09:41.251 "supported_io_types": { 00:09:41.251 "read": true, 00:09:41.251 "write": true, 00:09:41.251 "unmap": true, 00:09:41.251 "flush": true, 00:09:41.251 "reset": true, 00:09:41.251 "nvme_admin": false, 00:09:41.251 "nvme_io": false, 00:09:41.251 "nvme_io_md": false, 00:09:41.251 "write_zeroes": true, 00:09:41.251 "zcopy": true, 00:09:41.251 "get_zone_info": false, 00:09:41.251 "zone_management": false, 00:09:41.251 "zone_append": false, 00:09:41.251 "compare": false, 00:09:41.251 "compare_and_write": false, 00:09:41.251 "abort": true, 00:09:41.251 "seek_hole": false, 00:09:41.251 "seek_data": false, 00:09:41.251 "copy": true, 00:09:41.251 "nvme_iov_md": false 00:09:41.251 }, 00:09:41.251 "memory_domains": [ 00:09:41.251 { 00:09:41.251 "dma_device_id": "system", 00:09:41.251 "dma_device_type": 1 00:09:41.251 }, 00:09:41.251 { 00:09:41.251 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.251 "dma_device_type": 2 00:09:41.251 } 00:09:41.251 ], 00:09:41.251 "driver_specific": {} 00:09:41.251 } 00:09:41.251 ] 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.251 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.509 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.509 "name": "Existed_Raid", 00:09:41.509 "uuid": "3f8f9a91-5378-4c21-bc99-adda1962c745", 00:09:41.509 "strip_size_kb": 64, 00:09:41.509 "state": "configuring", 00:09:41.509 "raid_level": "concat", 00:09:41.509 "superblock": true, 00:09:41.509 "num_base_bdevs": 3, 00:09:41.509 "num_base_bdevs_discovered": 2, 00:09:41.509 "num_base_bdevs_operational": 3, 00:09:41.509 "base_bdevs_list": [ 00:09:41.509 { 00:09:41.509 "name": "BaseBdev1", 00:09:41.509 "uuid": "df5b7510-e484-456f-8695-0fa61c51fc93", 00:09:41.509 "is_configured": true, 00:09:41.509 "data_offset": 2048, 00:09:41.509 "data_size": 63488 00:09:41.509 }, 00:09:41.509 { 00:09:41.509 "name": "BaseBdev2", 00:09:41.509 "uuid": "e78338c1-3565-40e4-a868-e594662cd95b", 00:09:41.509 "is_configured": true, 00:09:41.509 "data_offset": 2048, 00:09:41.509 "data_size": 63488 00:09:41.509 }, 00:09:41.509 { 00:09:41.509 "name": "BaseBdev3", 00:09:41.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.509 "is_configured": false, 00:09:41.509 "data_offset": 0, 00:09:41.509 "data_size": 0 00:09:41.509 } 00:09:41.509 ] 00:09:41.509 }' 00:09:41.509 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.509 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.768 13:26:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.768 [2024-11-18 13:26:11.753464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.768 [2024-11-18 13:26:11.753718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:41.768 [2024-11-18 13:26:11.753741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:41.768 BaseBdev3 00:09:41.768 [2024-11-18 13:26:11.754187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:41.768 [2024-11-18 13:26:11.754376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:41.768 [2024-11-18 13:26:11.754388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:41.768 [2024-11-18 13:26:11.754526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.768 [ 00:09:41.768 { 00:09:41.768 "name": "BaseBdev3", 00:09:41.768 "aliases": [ 00:09:41.768 "7c619939-2bd9-49de-b382-b38d389ab5fc" 00:09:41.768 ], 00:09:41.768 "product_name": "Malloc disk", 00:09:41.768 "block_size": 512, 00:09:41.768 "num_blocks": 65536, 00:09:41.768 "uuid": "7c619939-2bd9-49de-b382-b38d389ab5fc", 00:09:41.768 "assigned_rate_limits": { 00:09:41.768 "rw_ios_per_sec": 0, 00:09:41.768 "rw_mbytes_per_sec": 0, 00:09:41.768 "r_mbytes_per_sec": 0, 00:09:41.768 "w_mbytes_per_sec": 0 00:09:41.768 }, 00:09:41.768 "claimed": true, 00:09:41.768 "claim_type": "exclusive_write", 00:09:41.768 "zoned": false, 00:09:41.768 "supported_io_types": { 00:09:41.768 "read": true, 00:09:41.768 "write": true, 00:09:41.768 "unmap": true, 00:09:41.768 "flush": true, 00:09:41.768 "reset": true, 00:09:41.768 "nvme_admin": false, 00:09:41.768 "nvme_io": false, 00:09:41.768 "nvme_io_md": false, 00:09:41.768 "write_zeroes": true, 00:09:41.768 "zcopy": true, 00:09:41.768 "get_zone_info": false, 00:09:41.768 "zone_management": false, 00:09:41.768 "zone_append": false, 00:09:41.768 "compare": false, 00:09:41.768 "compare_and_write": false, 00:09:41.768 "abort": true, 00:09:41.768 "seek_hole": false, 00:09:41.768 "seek_data": false, 
00:09:41.768 "copy": true, 00:09:41.768 "nvme_iov_md": false 00:09:41.768 }, 00:09:41.768 "memory_domains": [ 00:09:41.768 { 00:09:41.768 "dma_device_id": "system", 00:09:41.768 "dma_device_type": 1 00:09:41.768 }, 00:09:41.768 { 00:09:41.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.768 "dma_device_type": 2 00:09:41.768 } 00:09:41.768 ], 00:09:41.768 "driver_specific": {} 00:09:41.768 } 00:09:41.768 ] 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.768 13:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.027 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.027 "name": "Existed_Raid", 00:09:42.027 "uuid": "3f8f9a91-5378-4c21-bc99-adda1962c745", 00:09:42.027 "strip_size_kb": 64, 00:09:42.027 "state": "online", 00:09:42.027 "raid_level": "concat", 00:09:42.027 "superblock": true, 00:09:42.027 "num_base_bdevs": 3, 00:09:42.027 "num_base_bdevs_discovered": 3, 00:09:42.027 "num_base_bdevs_operational": 3, 00:09:42.027 "base_bdevs_list": [ 00:09:42.027 { 00:09:42.027 "name": "BaseBdev1", 00:09:42.027 "uuid": "df5b7510-e484-456f-8695-0fa61c51fc93", 00:09:42.027 "is_configured": true, 00:09:42.027 "data_offset": 2048, 00:09:42.027 "data_size": 63488 00:09:42.027 }, 00:09:42.027 { 00:09:42.027 "name": "BaseBdev2", 00:09:42.027 "uuid": "e78338c1-3565-40e4-a868-e594662cd95b", 00:09:42.027 "is_configured": true, 00:09:42.027 "data_offset": 2048, 00:09:42.027 "data_size": 63488 00:09:42.027 }, 00:09:42.027 { 00:09:42.027 "name": "BaseBdev3", 00:09:42.027 "uuid": "7c619939-2bd9-49de-b382-b38d389ab5fc", 00:09:42.027 "is_configured": true, 00:09:42.027 "data_offset": 2048, 00:09:42.027 "data_size": 63488 00:09:42.027 } 00:09:42.027 ] 00:09:42.027 }' 00:09:42.027 13:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.027 13:26:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.286 [2024-11-18 13:26:12.253012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.286 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.286 "name": "Existed_Raid", 00:09:42.286 "aliases": [ 00:09:42.286 "3f8f9a91-5378-4c21-bc99-adda1962c745" 00:09:42.286 ], 00:09:42.286 "product_name": "Raid Volume", 00:09:42.286 "block_size": 512, 00:09:42.286 "num_blocks": 190464, 00:09:42.286 "uuid": "3f8f9a91-5378-4c21-bc99-adda1962c745", 00:09:42.286 "assigned_rate_limits": { 00:09:42.286 "rw_ios_per_sec": 0, 00:09:42.286 "rw_mbytes_per_sec": 0, 00:09:42.286 
"r_mbytes_per_sec": 0, 00:09:42.286 "w_mbytes_per_sec": 0 00:09:42.286 }, 00:09:42.286 "claimed": false, 00:09:42.286 "zoned": false, 00:09:42.286 "supported_io_types": { 00:09:42.286 "read": true, 00:09:42.286 "write": true, 00:09:42.286 "unmap": true, 00:09:42.286 "flush": true, 00:09:42.286 "reset": true, 00:09:42.286 "nvme_admin": false, 00:09:42.286 "nvme_io": false, 00:09:42.286 "nvme_io_md": false, 00:09:42.286 "write_zeroes": true, 00:09:42.286 "zcopy": false, 00:09:42.286 "get_zone_info": false, 00:09:42.286 "zone_management": false, 00:09:42.286 "zone_append": false, 00:09:42.286 "compare": false, 00:09:42.286 "compare_and_write": false, 00:09:42.286 "abort": false, 00:09:42.287 "seek_hole": false, 00:09:42.287 "seek_data": false, 00:09:42.287 "copy": false, 00:09:42.287 "nvme_iov_md": false 00:09:42.287 }, 00:09:42.287 "memory_domains": [ 00:09:42.287 { 00:09:42.287 "dma_device_id": "system", 00:09:42.287 "dma_device_type": 1 00:09:42.287 }, 00:09:42.287 { 00:09:42.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.287 "dma_device_type": 2 00:09:42.287 }, 00:09:42.287 { 00:09:42.287 "dma_device_id": "system", 00:09:42.287 "dma_device_type": 1 00:09:42.287 }, 00:09:42.287 { 00:09:42.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.287 "dma_device_type": 2 00:09:42.287 }, 00:09:42.287 { 00:09:42.287 "dma_device_id": "system", 00:09:42.287 "dma_device_type": 1 00:09:42.287 }, 00:09:42.287 { 00:09:42.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.287 "dma_device_type": 2 00:09:42.287 } 00:09:42.287 ], 00:09:42.287 "driver_specific": { 00:09:42.287 "raid": { 00:09:42.287 "uuid": "3f8f9a91-5378-4c21-bc99-adda1962c745", 00:09:42.287 "strip_size_kb": 64, 00:09:42.287 "state": "online", 00:09:42.287 "raid_level": "concat", 00:09:42.287 "superblock": true, 00:09:42.287 "num_base_bdevs": 3, 00:09:42.287 "num_base_bdevs_discovered": 3, 00:09:42.287 "num_base_bdevs_operational": 3, 00:09:42.287 "base_bdevs_list": [ 00:09:42.287 { 00:09:42.287 
"name": "BaseBdev1", 00:09:42.287 "uuid": "df5b7510-e484-456f-8695-0fa61c51fc93", 00:09:42.287 "is_configured": true, 00:09:42.287 "data_offset": 2048, 00:09:42.287 "data_size": 63488 00:09:42.287 }, 00:09:42.287 { 00:09:42.287 "name": "BaseBdev2", 00:09:42.287 "uuid": "e78338c1-3565-40e4-a868-e594662cd95b", 00:09:42.287 "is_configured": true, 00:09:42.287 "data_offset": 2048, 00:09:42.287 "data_size": 63488 00:09:42.287 }, 00:09:42.287 { 00:09:42.287 "name": "BaseBdev3", 00:09:42.287 "uuid": "7c619939-2bd9-49de-b382-b38d389ab5fc", 00:09:42.287 "is_configured": true, 00:09:42.287 "data_offset": 2048, 00:09:42.287 "data_size": 63488 00:09:42.287 } 00:09:42.287 ] 00:09:42.287 } 00:09:42.287 } 00:09:42.287 }' 00:09:42.287 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:42.547 BaseBdev2 00:09:42.547 BaseBdev3' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.547 13:26:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.547 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.547 [2024-11-18 13:26:12.548304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.547 [2024-11-18 13:26:12.548337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.547 [2024-11-18 13:26:12.548390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.806 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.806 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:42.806 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:42.806 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.806 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.806 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:42.806 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:42.806 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.807 "name": "Existed_Raid", 00:09:42.807 "uuid": "3f8f9a91-5378-4c21-bc99-adda1962c745", 00:09:42.807 "strip_size_kb": 64, 00:09:42.807 "state": "offline", 00:09:42.807 "raid_level": "concat", 00:09:42.807 "superblock": true, 00:09:42.807 "num_base_bdevs": 3, 00:09:42.807 "num_base_bdevs_discovered": 2, 00:09:42.807 "num_base_bdevs_operational": 2, 00:09:42.807 "base_bdevs_list": [ 00:09:42.807 { 00:09:42.807 "name": null, 00:09:42.807 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:42.807 "is_configured": false, 00:09:42.807 "data_offset": 0, 00:09:42.807 "data_size": 63488 00:09:42.807 }, 00:09:42.807 { 00:09:42.807 "name": "BaseBdev2", 00:09:42.807 "uuid": "e78338c1-3565-40e4-a868-e594662cd95b", 00:09:42.807 "is_configured": true, 00:09:42.807 "data_offset": 2048, 00:09:42.807 "data_size": 63488 00:09:42.807 }, 00:09:42.807 { 00:09:42.807 "name": "BaseBdev3", 00:09:42.807 "uuid": "7c619939-2bd9-49de-b382-b38d389ab5fc", 00:09:42.807 "is_configured": true, 00:09:42.807 "data_offset": 2048, 00:09:42.807 "data_size": 63488 00:09:42.807 } 00:09:42.807 ] 00:09:42.807 }' 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.807 13:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 [2024-11-18 13:26:13.178358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.373 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 [2024-11-18 13:26:13.332317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.373 [2024-11-18 13:26:13.332457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.633 BaseBdev2 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 
13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.633 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.633 [ 00:09:43.633 { 00:09:43.633 "name": "BaseBdev2", 00:09:43.633 "aliases": [ 00:09:43.633 "89278c86-9dc8-4c77-9b73-f1c7126a7de9" 00:09:43.633 ], 00:09:43.633 "product_name": "Malloc disk", 00:09:43.633 "block_size": 512, 00:09:43.633 "num_blocks": 65536, 00:09:43.633 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:43.633 "assigned_rate_limits": { 00:09:43.633 "rw_ios_per_sec": 0, 00:09:43.633 "rw_mbytes_per_sec": 0, 00:09:43.633 "r_mbytes_per_sec": 0, 00:09:43.633 "w_mbytes_per_sec": 0 
00:09:43.633 }, 00:09:43.633 "claimed": false, 00:09:43.634 "zoned": false, 00:09:43.634 "supported_io_types": { 00:09:43.634 "read": true, 00:09:43.634 "write": true, 00:09:43.634 "unmap": true, 00:09:43.634 "flush": true, 00:09:43.634 "reset": true, 00:09:43.634 "nvme_admin": false, 00:09:43.634 "nvme_io": false, 00:09:43.634 "nvme_io_md": false, 00:09:43.634 "write_zeroes": true, 00:09:43.634 "zcopy": true, 00:09:43.634 "get_zone_info": false, 00:09:43.634 "zone_management": false, 00:09:43.634 "zone_append": false, 00:09:43.634 "compare": false, 00:09:43.634 "compare_and_write": false, 00:09:43.634 "abort": true, 00:09:43.634 "seek_hole": false, 00:09:43.634 "seek_data": false, 00:09:43.634 "copy": true, 00:09:43.634 "nvme_iov_md": false 00:09:43.634 }, 00:09:43.634 "memory_domains": [ 00:09:43.634 { 00:09:43.634 "dma_device_id": "system", 00:09:43.634 "dma_device_type": 1 00:09:43.634 }, 00:09:43.634 { 00:09:43.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.634 "dma_device_type": 2 00:09:43.634 } 00:09:43.634 ], 00:09:43.634 "driver_specific": {} 00:09:43.634 } 00:09:43.634 ] 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.634 BaseBdev3 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.634 [ 00:09:43.634 { 00:09:43.634 "name": "BaseBdev3", 00:09:43.634 "aliases": [ 00:09:43.634 "4d9196a6-9422-48e5-8f5a-e18adea9c1a8" 00:09:43.634 ], 00:09:43.634 "product_name": "Malloc disk", 00:09:43.634 "block_size": 512, 00:09:43.634 "num_blocks": 65536, 00:09:43.634 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:43.634 "assigned_rate_limits": { 00:09:43.634 "rw_ios_per_sec": 0, 00:09:43.634 "rw_mbytes_per_sec": 0, 
00:09:43.634 "r_mbytes_per_sec": 0, 00:09:43.634 "w_mbytes_per_sec": 0 00:09:43.634 }, 00:09:43.634 "claimed": false, 00:09:43.634 "zoned": false, 00:09:43.634 "supported_io_types": { 00:09:43.634 "read": true, 00:09:43.634 "write": true, 00:09:43.634 "unmap": true, 00:09:43.634 "flush": true, 00:09:43.634 "reset": true, 00:09:43.634 "nvme_admin": false, 00:09:43.634 "nvme_io": false, 00:09:43.634 "nvme_io_md": false, 00:09:43.634 "write_zeroes": true, 00:09:43.634 "zcopy": true, 00:09:43.634 "get_zone_info": false, 00:09:43.634 "zone_management": false, 00:09:43.634 "zone_append": false, 00:09:43.634 "compare": false, 00:09:43.634 "compare_and_write": false, 00:09:43.634 "abort": true, 00:09:43.634 "seek_hole": false, 00:09:43.634 "seek_data": false, 00:09:43.634 "copy": true, 00:09:43.634 "nvme_iov_md": false 00:09:43.634 }, 00:09:43.634 "memory_domains": [ 00:09:43.634 { 00:09:43.634 "dma_device_id": "system", 00:09:43.634 "dma_device_type": 1 00:09:43.634 }, 00:09:43.634 { 00:09:43.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.634 "dma_device_type": 2 00:09:43.634 } 00:09:43.634 ], 00:09:43.634 "driver_specific": {} 00:09:43.634 } 00:09:43.634 ] 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.634 [2024-11-18 13:26:13.642283] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.634 [2024-11-18 13:26:13.642455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.634 [2024-11-18 13:26:13.642498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.634 [2024-11-18 13:26:13.644250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.634 13:26:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.634 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.893 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.893 "name": "Existed_Raid", 00:09:43.893 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:43.893 "strip_size_kb": 64, 00:09:43.893 "state": "configuring", 00:09:43.893 "raid_level": "concat", 00:09:43.893 "superblock": true, 00:09:43.893 "num_base_bdevs": 3, 00:09:43.893 "num_base_bdevs_discovered": 2, 00:09:43.893 "num_base_bdevs_operational": 3, 00:09:43.893 "base_bdevs_list": [ 00:09:43.893 { 00:09:43.893 "name": "BaseBdev1", 00:09:43.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.893 "is_configured": false, 00:09:43.893 "data_offset": 0, 00:09:43.893 "data_size": 0 00:09:43.893 }, 00:09:43.893 { 00:09:43.893 "name": "BaseBdev2", 00:09:43.893 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:43.893 "is_configured": true, 00:09:43.893 "data_offset": 2048, 00:09:43.893 "data_size": 63488 00:09:43.893 }, 00:09:43.893 { 00:09:43.893 "name": "BaseBdev3", 00:09:43.893 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:43.893 "is_configured": true, 00:09:43.893 "data_offset": 2048, 00:09:43.893 "data_size": 63488 00:09:43.893 } 00:09:43.893 ] 00:09:43.893 }' 00:09:43.893 13:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.893 13:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
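The `verify_raid_bdev_state Existed_Raid configuring concat 64 3` calls in this run compare fields of the `bdev_raid_get_bdevs` JSON against expected values. The helper itself is a bash function in `bdev/bdev_raid.sh`; the following is only a minimal Python sketch of the same field checks, with the record values copied from the `raid_bdev_info` dump above:

```python
import json

# Raid bdev record as reported by `rpc_cmd bdev_raid_get_bdevs all` above
# (values copied from the log output; trimmed to the checked fields).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Simplified sketch of the bash helper's comparisons."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must match the number of configured base bdevs.
    discovered = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert discovered == info["num_base_bdevs_discovered"]

verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 3)
```

With superblock mode (`-s`) and one base bdev missing, the raid stays in `configuring` rather than going `online`, which is exactly what the test asserts after each add/remove step.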
00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.152 [2024-11-18 13:26:14.109468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.152 "name": "Existed_Raid", 00:09:44.152 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:44.152 "strip_size_kb": 64, 00:09:44.152 "state": "configuring", 00:09:44.152 "raid_level": "concat", 00:09:44.152 "superblock": true, 00:09:44.152 "num_base_bdevs": 3, 00:09:44.152 "num_base_bdevs_discovered": 1, 00:09:44.152 "num_base_bdevs_operational": 3, 00:09:44.152 "base_bdevs_list": [ 00:09:44.152 { 00:09:44.152 "name": "BaseBdev1", 00:09:44.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.152 "is_configured": false, 00:09:44.152 "data_offset": 0, 00:09:44.152 "data_size": 0 00:09:44.152 }, 00:09:44.152 { 00:09:44.152 "name": null, 00:09:44.152 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:44.152 "is_configured": false, 00:09:44.152 "data_offset": 0, 00:09:44.152 "data_size": 63488 00:09:44.152 }, 00:09:44.152 { 00:09:44.152 "name": "BaseBdev3", 00:09:44.152 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:44.152 "is_configured": true, 00:09:44.152 "data_offset": 2048, 00:09:44.152 "data_size": 63488 00:09:44.152 } 00:09:44.152 ] 00:09:44.152 }' 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.152 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.719 [2024-11-18 13:26:14.689044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.719 BaseBdev1 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.719 13:26:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.719 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.719 [ 00:09:44.719 { 00:09:44.719 "name": "BaseBdev1", 00:09:44.719 "aliases": [ 00:09:44.719 "21d76b10-0a8d-433a-a646-ed5bc45ffad8" 00:09:44.719 ], 00:09:44.719 "product_name": "Malloc disk", 00:09:44.719 "block_size": 512, 00:09:44.719 "num_blocks": 65536, 00:09:44.719 "uuid": "21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:44.719 "assigned_rate_limits": { 00:09:44.719 "rw_ios_per_sec": 0, 00:09:44.719 "rw_mbytes_per_sec": 0, 00:09:44.719 "r_mbytes_per_sec": 0, 00:09:44.719 "w_mbytes_per_sec": 0 00:09:44.719 }, 00:09:44.719 "claimed": true, 00:09:44.719 "claim_type": "exclusive_write", 00:09:44.719 "zoned": false, 00:09:44.719 "supported_io_types": { 00:09:44.719 "read": true, 00:09:44.719 "write": true, 00:09:44.719 "unmap": true, 00:09:44.719 "flush": true, 00:09:44.719 "reset": true, 00:09:44.719 "nvme_admin": false, 00:09:44.719 "nvme_io": false, 00:09:44.719 "nvme_io_md": false, 00:09:44.719 "write_zeroes": true, 00:09:44.719 "zcopy": true, 00:09:44.719 "get_zone_info": false, 00:09:44.719 "zone_management": false, 00:09:44.719 "zone_append": false, 00:09:44.719 "compare": false, 00:09:44.719 "compare_and_write": false, 00:09:44.719 "abort": true, 00:09:44.719 "seek_hole": false, 00:09:44.719 "seek_data": false, 00:09:44.719 "copy": true, 00:09:44.719 "nvme_iov_md": false 00:09:44.719 }, 00:09:44.719 "memory_domains": [ 00:09:44.719 { 00:09:44.719 "dma_device_id": "system", 00:09:44.719 "dma_device_type": 1 00:09:44.719 }, 00:09:44.719 { 00:09:44.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.719 
"dma_device_type": 2 00:09:44.719 } 00:09:44.719 ], 00:09:44.720 "driver_specific": {} 00:09:44.720 } 00:09:44.720 ] 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:44.720 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.979 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.979 "name": "Existed_Raid", 00:09:44.979 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:44.979 "strip_size_kb": 64, 00:09:44.979 "state": "configuring", 00:09:44.979 "raid_level": "concat", 00:09:44.979 "superblock": true, 00:09:44.979 "num_base_bdevs": 3, 00:09:44.979 "num_base_bdevs_discovered": 2, 00:09:44.979 "num_base_bdevs_operational": 3, 00:09:44.979 "base_bdevs_list": [ 00:09:44.979 { 00:09:44.979 "name": "BaseBdev1", 00:09:44.979 "uuid": "21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:44.979 "is_configured": true, 00:09:44.979 "data_offset": 2048, 00:09:44.979 "data_size": 63488 00:09:44.979 }, 00:09:44.979 { 00:09:44.979 "name": null, 00:09:44.979 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:44.979 "is_configured": false, 00:09:44.979 "data_offset": 0, 00:09:44.979 "data_size": 63488 00:09:44.979 }, 00:09:44.979 { 00:09:44.979 "name": "BaseBdev3", 00:09:44.979 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:44.979 "is_configured": true, 00:09:44.979 "data_offset": 2048, 00:09:44.979 "data_size": 63488 00:09:44.979 } 00:09:44.979 ] 00:09:44.979 }' 00:09:44.979 13:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.979 13:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.237 [2024-11-18 13:26:15.208213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.237 
13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.237 "name": "Existed_Raid", 00:09:45.237 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:45.237 "strip_size_kb": 64, 00:09:45.237 "state": "configuring", 00:09:45.237 "raid_level": "concat", 00:09:45.237 "superblock": true, 00:09:45.237 "num_base_bdevs": 3, 00:09:45.237 "num_base_bdevs_discovered": 1, 00:09:45.237 "num_base_bdevs_operational": 3, 00:09:45.237 "base_bdevs_list": [ 00:09:45.237 { 00:09:45.237 "name": "BaseBdev1", 00:09:45.237 "uuid": "21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:45.237 "is_configured": true, 00:09:45.237 "data_offset": 2048, 00:09:45.237 "data_size": 63488 00:09:45.237 }, 00:09:45.237 { 00:09:45.237 "name": null, 00:09:45.237 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:45.237 "is_configured": false, 00:09:45.237 "data_offset": 0, 00:09:45.237 "data_size": 63488 00:09:45.237 }, 00:09:45.237 { 00:09:45.237 "name": null, 00:09:45.237 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:45.237 "is_configured": false, 00:09:45.237 "data_offset": 0, 00:09:45.237 "data_size": 63488 00:09:45.237 } 00:09:45.237 ] 00:09:45.237 }' 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.237 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.804 
13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.804 [2024-11-18 13:26:15.699404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.804 "name": "Existed_Raid", 00:09:45.804 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:45.804 "strip_size_kb": 64, 00:09:45.804 "state": "configuring", 00:09:45.804 "raid_level": "concat", 00:09:45.804 "superblock": true, 00:09:45.804 "num_base_bdevs": 3, 00:09:45.804 "num_base_bdevs_discovered": 2, 00:09:45.804 "num_base_bdevs_operational": 3, 00:09:45.804 "base_bdevs_list": [ 00:09:45.804 { 00:09:45.804 "name": "BaseBdev1", 00:09:45.804 "uuid": "21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:45.804 "is_configured": true, 00:09:45.804 "data_offset": 2048, 00:09:45.804 "data_size": 63488 00:09:45.804 }, 00:09:45.804 { 00:09:45.804 "name": null, 00:09:45.804 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:45.804 "is_configured": false, 00:09:45.804 "data_offset": 0, 00:09:45.804 "data_size": 
63488 00:09:45.804 }, 00:09:45.804 { 00:09:45.804 "name": "BaseBdev3", 00:09:45.804 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:45.804 "is_configured": true, 00:09:45.804 "data_offset": 2048, 00:09:45.804 "data_size": 63488 00:09:45.804 } 00:09:45.804 ] 00:09:45.804 }' 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.804 13:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.370 [2024-11-18 13:26:16.158635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.370 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.370 "name": "Existed_Raid", 00:09:46.370 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:46.370 "strip_size_kb": 64, 00:09:46.370 "state": "configuring", 00:09:46.370 "raid_level": "concat", 00:09:46.370 "superblock": true, 00:09:46.370 "num_base_bdevs": 3, 00:09:46.370 "num_base_bdevs_discovered": 1, 00:09:46.370 "num_base_bdevs_operational": 
3, 00:09:46.370 "base_bdevs_list": [ 00:09:46.370 { 00:09:46.370 "name": null, 00:09:46.370 "uuid": "21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:46.370 "is_configured": false, 00:09:46.370 "data_offset": 0, 00:09:46.370 "data_size": 63488 00:09:46.370 }, 00:09:46.370 { 00:09:46.370 "name": null, 00:09:46.370 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:46.370 "is_configured": false, 00:09:46.370 "data_offset": 0, 00:09:46.371 "data_size": 63488 00:09:46.371 }, 00:09:46.371 { 00:09:46.371 "name": "BaseBdev3", 00:09:46.371 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:46.371 "is_configured": true, 00:09:46.371 "data_offset": 2048, 00:09:46.371 "data_size": 63488 00:09:46.371 } 00:09:46.371 ] 00:09:46.371 }' 00:09:46.371 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.371 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:46.938 [2024-11-18 13:26:16.733765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.938 "name": "Existed_Raid", 00:09:46.938 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:46.938 "strip_size_kb": 64, 00:09:46.938 "state": "configuring", 00:09:46.938 "raid_level": "concat", 00:09:46.938 "superblock": true, 00:09:46.938 "num_base_bdevs": 3, 00:09:46.938 "num_base_bdevs_discovered": 2, 00:09:46.938 "num_base_bdevs_operational": 3, 00:09:46.938 "base_bdevs_list": [ 00:09:46.938 { 00:09:46.938 "name": null, 00:09:46.938 "uuid": "21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:46.938 "is_configured": false, 00:09:46.938 "data_offset": 0, 00:09:46.938 "data_size": 63488 00:09:46.938 }, 00:09:46.938 { 00:09:46.938 "name": "BaseBdev2", 00:09:46.938 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:46.938 "is_configured": true, 00:09:46.938 "data_offset": 2048, 00:09:46.938 "data_size": 63488 00:09:46.938 }, 00:09:46.938 { 00:09:46.938 "name": "BaseBdev3", 00:09:46.938 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:46.938 "is_configured": true, 00:09:46.938 "data_offset": 2048, 00:09:46.938 "data_size": 63488 00:09:46.938 } 00:09:46.938 ] 00:09:46.938 }' 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.938 13:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.196 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.196 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:47.196 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.196 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.196 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 21d76b10-0a8d-433a-a646-ed5bc45ffad8 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.455 [2024-11-18 13:26:17.340719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:47.455 [2024-11-18 13:26:17.340937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:47.455 [2024-11-18 13:26:17.340954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:47.455 [2024-11-18 13:26:17.341243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:47.455 [2024-11-18 13:26:17.341383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:47.455 [2024-11-18 13:26:17.341393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:47.455 NewBaseBdev 00:09:47.455 [2024-11-18 13:26:17.341513] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.455 [ 00:09:47.455 { 00:09:47.455 "name": "NewBaseBdev", 00:09:47.455 "aliases": [ 00:09:47.455 "21d76b10-0a8d-433a-a646-ed5bc45ffad8" 00:09:47.455 ], 00:09:47.455 "product_name": "Malloc disk", 00:09:47.455 "block_size": 512, 00:09:47.455 "num_blocks": 65536, 00:09:47.455 "uuid": 
"21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:47.455 "assigned_rate_limits": { 00:09:47.455 "rw_ios_per_sec": 0, 00:09:47.455 "rw_mbytes_per_sec": 0, 00:09:47.455 "r_mbytes_per_sec": 0, 00:09:47.455 "w_mbytes_per_sec": 0 00:09:47.455 }, 00:09:47.455 "claimed": true, 00:09:47.455 "claim_type": "exclusive_write", 00:09:47.455 "zoned": false, 00:09:47.455 "supported_io_types": { 00:09:47.455 "read": true, 00:09:47.455 "write": true, 00:09:47.455 "unmap": true, 00:09:47.455 "flush": true, 00:09:47.455 "reset": true, 00:09:47.455 "nvme_admin": false, 00:09:47.455 "nvme_io": false, 00:09:47.455 "nvme_io_md": false, 00:09:47.455 "write_zeroes": true, 00:09:47.455 "zcopy": true, 00:09:47.455 "get_zone_info": false, 00:09:47.455 "zone_management": false, 00:09:47.455 "zone_append": false, 00:09:47.455 "compare": false, 00:09:47.455 "compare_and_write": false, 00:09:47.455 "abort": true, 00:09:47.455 "seek_hole": false, 00:09:47.455 "seek_data": false, 00:09:47.455 "copy": true, 00:09:47.455 "nvme_iov_md": false 00:09:47.455 }, 00:09:47.455 "memory_domains": [ 00:09:47.455 { 00:09:47.455 "dma_device_id": "system", 00:09:47.455 "dma_device_type": 1 00:09:47.455 }, 00:09:47.455 { 00:09:47.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.455 "dma_device_type": 2 00:09:47.455 } 00:09:47.455 ], 00:09:47.455 "driver_specific": {} 00:09:47.455 } 00:09:47.455 ] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.455 13:26:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.455 "name": "Existed_Raid", 00:09:47.455 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:47.455 "strip_size_kb": 64, 00:09:47.455 "state": "online", 00:09:47.455 "raid_level": "concat", 00:09:47.455 "superblock": true, 00:09:47.455 "num_base_bdevs": 3, 00:09:47.455 "num_base_bdevs_discovered": 3, 00:09:47.455 "num_base_bdevs_operational": 3, 00:09:47.455 "base_bdevs_list": [ 00:09:47.455 { 00:09:47.455 "name": "NewBaseBdev", 00:09:47.455 "uuid": "21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:47.455 "is_configured": 
true, 00:09:47.455 "data_offset": 2048, 00:09:47.455 "data_size": 63488 00:09:47.455 }, 00:09:47.455 { 00:09:47.455 "name": "BaseBdev2", 00:09:47.455 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:47.455 "is_configured": true, 00:09:47.455 "data_offset": 2048, 00:09:47.455 "data_size": 63488 00:09:47.455 }, 00:09:47.455 { 00:09:47.455 "name": "BaseBdev3", 00:09:47.455 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:47.455 "is_configured": true, 00:09:47.455 "data_offset": 2048, 00:09:47.455 "data_size": 63488 00:09:47.455 } 00:09:47.455 ] 00:09:47.455 }' 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.455 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.023 [2024-11-18 13:26:17.792258] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.023 "name": "Existed_Raid", 00:09:48.023 "aliases": [ 00:09:48.023 "928d0fce-558a-4217-918f-de404dc60bfc" 00:09:48.023 ], 00:09:48.023 "product_name": "Raid Volume", 00:09:48.023 "block_size": 512, 00:09:48.023 "num_blocks": 190464, 00:09:48.023 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:48.023 "assigned_rate_limits": { 00:09:48.023 "rw_ios_per_sec": 0, 00:09:48.023 "rw_mbytes_per_sec": 0, 00:09:48.023 "r_mbytes_per_sec": 0, 00:09:48.023 "w_mbytes_per_sec": 0 00:09:48.023 }, 00:09:48.023 "claimed": false, 00:09:48.023 "zoned": false, 00:09:48.023 "supported_io_types": { 00:09:48.023 "read": true, 00:09:48.023 "write": true, 00:09:48.023 "unmap": true, 00:09:48.023 "flush": true, 00:09:48.023 "reset": true, 00:09:48.023 "nvme_admin": false, 00:09:48.023 "nvme_io": false, 00:09:48.023 "nvme_io_md": false, 00:09:48.023 "write_zeroes": true, 00:09:48.023 "zcopy": false, 00:09:48.023 "get_zone_info": false, 00:09:48.023 "zone_management": false, 00:09:48.023 "zone_append": false, 00:09:48.023 "compare": false, 00:09:48.023 "compare_and_write": false, 00:09:48.023 "abort": false, 00:09:48.023 "seek_hole": false, 00:09:48.023 "seek_data": false, 00:09:48.023 "copy": false, 00:09:48.023 "nvme_iov_md": false 00:09:48.023 }, 00:09:48.023 "memory_domains": [ 00:09:48.023 { 00:09:48.023 "dma_device_id": "system", 00:09:48.023 "dma_device_type": 1 00:09:48.023 }, 00:09:48.023 { 00:09:48.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.023 "dma_device_type": 2 00:09:48.023 }, 00:09:48.023 { 00:09:48.023 "dma_device_id": "system", 00:09:48.023 "dma_device_type": 1 00:09:48.023 }, 00:09:48.023 { 00:09:48.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.023 
"dma_device_type": 2 00:09:48.023 }, 00:09:48.023 { 00:09:48.023 "dma_device_id": "system", 00:09:48.023 "dma_device_type": 1 00:09:48.023 }, 00:09:48.023 { 00:09:48.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.023 "dma_device_type": 2 00:09:48.023 } 00:09:48.023 ], 00:09:48.023 "driver_specific": { 00:09:48.023 "raid": { 00:09:48.023 "uuid": "928d0fce-558a-4217-918f-de404dc60bfc", 00:09:48.023 "strip_size_kb": 64, 00:09:48.023 "state": "online", 00:09:48.023 "raid_level": "concat", 00:09:48.023 "superblock": true, 00:09:48.023 "num_base_bdevs": 3, 00:09:48.023 "num_base_bdevs_discovered": 3, 00:09:48.023 "num_base_bdevs_operational": 3, 00:09:48.023 "base_bdevs_list": [ 00:09:48.023 { 00:09:48.023 "name": "NewBaseBdev", 00:09:48.023 "uuid": "21d76b10-0a8d-433a-a646-ed5bc45ffad8", 00:09:48.023 "is_configured": true, 00:09:48.023 "data_offset": 2048, 00:09:48.023 "data_size": 63488 00:09:48.023 }, 00:09:48.023 { 00:09:48.023 "name": "BaseBdev2", 00:09:48.023 "uuid": "89278c86-9dc8-4c77-9b73-f1c7126a7de9", 00:09:48.023 "is_configured": true, 00:09:48.023 "data_offset": 2048, 00:09:48.023 "data_size": 63488 00:09:48.023 }, 00:09:48.023 { 00:09:48.023 "name": "BaseBdev3", 00:09:48.023 "uuid": "4d9196a6-9422-48e5-8f5a-e18adea9c1a8", 00:09:48.023 "is_configured": true, 00:09:48.023 "data_offset": 2048, 00:09:48.023 "data_size": 63488 00:09:48.023 } 00:09:48.023 ] 00:09:48.023 } 00:09:48.023 } 00:09:48.023 }' 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.023 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:48.023 BaseBdev2 00:09:48.023 BaseBdev3' 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.024 13:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.024 
13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.024 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.283 [2024-11-18 13:26:18.075501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.283 [2024-11-18 13:26:18.075533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.283 [2024-11-18 13:26:18.075617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.283 [2024-11-18 13:26:18.075675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.283 [2024-11-18 13:26:18.075688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:48.283 13:26:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66249 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66249 ']' 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66249 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66249 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66249' 00:09:48.283 killing process with pid 66249 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66249 00:09:48.283 [2024-11-18 13:26:18.123349] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.283 13:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66249 00:09:48.541 [2024-11-18 13:26:18.419541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.477 13:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.477 00:09:49.477 real 0m10.771s 00:09:49.477 user 0m17.196s 00:09:49.477 sys 0m1.938s 00:09:49.477 ************************************ 00:09:49.477 END TEST raid_state_function_test_sb 00:09:49.477 ************************************ 00:09:49.477 13:26:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.477 13:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.735 13:26:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:49.735 13:26:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:49.735 13:26:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.735 13:26:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.735 ************************************ 00:09:49.735 START TEST raid_superblock_test 00:09:49.735 ************************************ 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66876 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66876 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66876 ']' 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.735 13:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.735 [2024-11-18 13:26:19.697122] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:49.735 [2024-11-18 13:26:19.697284] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66876 ] 00:09:49.994 [2024-11-18 13:26:19.875914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.994 [2024-11-18 13:26:19.988007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.252 [2024-11-18 13:26:20.186216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.252 [2024-11-18 13:26:20.186267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:50.511 
13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.511 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.771 malloc1 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.771 [2024-11-18 13:26:20.580519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.771 [2024-11-18 13:26:20.580669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.771 [2024-11-18 13:26:20.580712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:50.771 [2024-11-18 13:26:20.580741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.771 [2024-11-18 13:26:20.582787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.771 [2024-11-18 13:26:20.582862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.771 pt1 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.771 malloc2 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.771 [2024-11-18 13:26:20.640532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.771 [2024-11-18 13:26:20.640663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.771 [2024-11-18 13:26:20.640703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:50.771 [2024-11-18 13:26:20.640731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.771 [2024-11-18 13:26:20.642788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.771 [2024-11-18 13:26:20.642859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.771 
pt2 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.771 malloc3 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.771 [2024-11-18 13:26:20.707495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:50.771 [2024-11-18 13:26:20.707612] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.771 [2024-11-18 13:26:20.707650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:50.771 [2024-11-18 13:26:20.707679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.771 [2024-11-18 13:26:20.709680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.771 [2024-11-18 13:26:20.709751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:50.771 pt3 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.771 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.771 [2024-11-18 13:26:20.719522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:50.771 [2024-11-18 13:26:20.721289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.771 [2024-11-18 13:26:20.721351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.771 [2024-11-18 13:26:20.721501] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:50.772 [2024-11-18 13:26:20.721514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.772 [2024-11-18 13:26:20.721738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:50.772 [2024-11-18 13:26:20.721890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:50.772 [2024-11-18 13:26:20.721899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:50.772 [2024-11-18 13:26:20.722040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.772 13:26:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.772 "name": "raid_bdev1", 00:09:50.772 "uuid": "04b98d20-f803-4c2f-a207-a962c6fc3d94", 00:09:50.772 "strip_size_kb": 64, 00:09:50.772 "state": "online", 00:09:50.772 "raid_level": "concat", 00:09:50.772 "superblock": true, 00:09:50.772 "num_base_bdevs": 3, 00:09:50.772 "num_base_bdevs_discovered": 3, 00:09:50.772 "num_base_bdevs_operational": 3, 00:09:50.772 "base_bdevs_list": [ 00:09:50.772 { 00:09:50.772 "name": "pt1", 00:09:50.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.772 "is_configured": true, 00:09:50.772 "data_offset": 2048, 00:09:50.772 "data_size": 63488 00:09:50.772 }, 00:09:50.772 { 00:09:50.772 "name": "pt2", 00:09:50.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.772 "is_configured": true, 00:09:50.772 "data_offset": 2048, 00:09:50.772 "data_size": 63488 00:09:50.772 }, 00:09:50.772 { 00:09:50.772 "name": "pt3", 00:09:50.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.772 "is_configured": true, 00:09:50.772 "data_offset": 2048, 00:09:50.772 "data_size": 63488 00:09:50.772 } 00:09:50.772 ] 00:09:50.772 }' 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.772 13:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.341 [2024-11-18 13:26:21.202997] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.341 "name": "raid_bdev1", 00:09:51.341 "aliases": [ 00:09:51.341 "04b98d20-f803-4c2f-a207-a962c6fc3d94" 00:09:51.341 ], 00:09:51.341 "product_name": "Raid Volume", 00:09:51.341 "block_size": 512, 00:09:51.341 "num_blocks": 190464, 00:09:51.341 "uuid": "04b98d20-f803-4c2f-a207-a962c6fc3d94", 00:09:51.341 "assigned_rate_limits": { 00:09:51.341 "rw_ios_per_sec": 0, 00:09:51.341 "rw_mbytes_per_sec": 0, 00:09:51.341 "r_mbytes_per_sec": 0, 00:09:51.341 "w_mbytes_per_sec": 0 00:09:51.341 }, 00:09:51.341 "claimed": false, 00:09:51.341 "zoned": false, 00:09:51.341 "supported_io_types": { 00:09:51.341 "read": true, 00:09:51.341 "write": true, 00:09:51.341 "unmap": true, 00:09:51.341 "flush": true, 00:09:51.341 "reset": true, 00:09:51.341 "nvme_admin": false, 00:09:51.341 "nvme_io": false, 00:09:51.341 "nvme_io_md": false, 00:09:51.341 "write_zeroes": true, 00:09:51.341 "zcopy": false, 00:09:51.341 "get_zone_info": false, 00:09:51.341 "zone_management": false, 00:09:51.341 "zone_append": false, 00:09:51.341 "compare": 
false, 00:09:51.341 "compare_and_write": false, 00:09:51.341 "abort": false, 00:09:51.341 "seek_hole": false, 00:09:51.341 "seek_data": false, 00:09:51.341 "copy": false, 00:09:51.341 "nvme_iov_md": false 00:09:51.341 }, 00:09:51.341 "memory_domains": [ 00:09:51.341 { 00:09:51.341 "dma_device_id": "system", 00:09:51.341 "dma_device_type": 1 00:09:51.341 }, 00:09:51.341 { 00:09:51.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.341 "dma_device_type": 2 00:09:51.341 }, 00:09:51.341 { 00:09:51.341 "dma_device_id": "system", 00:09:51.341 "dma_device_type": 1 00:09:51.341 }, 00:09:51.341 { 00:09:51.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.341 "dma_device_type": 2 00:09:51.341 }, 00:09:51.341 { 00:09:51.341 "dma_device_id": "system", 00:09:51.341 "dma_device_type": 1 00:09:51.341 }, 00:09:51.341 { 00:09:51.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.341 "dma_device_type": 2 00:09:51.341 } 00:09:51.341 ], 00:09:51.341 "driver_specific": { 00:09:51.341 "raid": { 00:09:51.341 "uuid": "04b98d20-f803-4c2f-a207-a962c6fc3d94", 00:09:51.341 "strip_size_kb": 64, 00:09:51.341 "state": "online", 00:09:51.341 "raid_level": "concat", 00:09:51.341 "superblock": true, 00:09:51.341 "num_base_bdevs": 3, 00:09:51.341 "num_base_bdevs_discovered": 3, 00:09:51.341 "num_base_bdevs_operational": 3, 00:09:51.341 "base_bdevs_list": [ 00:09:51.341 { 00:09:51.341 "name": "pt1", 00:09:51.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.341 "is_configured": true, 00:09:51.341 "data_offset": 2048, 00:09:51.341 "data_size": 63488 00:09:51.341 }, 00:09:51.341 { 00:09:51.341 "name": "pt2", 00:09:51.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.341 "is_configured": true, 00:09:51.341 "data_offset": 2048, 00:09:51.341 "data_size": 63488 00:09:51.341 }, 00:09:51.341 { 00:09:51.341 "name": "pt3", 00:09:51.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.341 "is_configured": true, 00:09:51.341 "data_offset": 2048, 00:09:51.341 
"data_size": 63488 00:09:51.341 } 00:09:51.341 ] 00:09:51.341 } 00:09:51.341 } 00:09:51.341 }' 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:51.341 pt2 00:09:51.341 pt3' 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.341 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.342 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.342 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.342 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.342 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.342 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:51.342 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.342 13:26:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.342 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.622 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.622 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.622 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.622 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 [2024-11-18 13:26:21.478535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.623 13:26:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=04b98d20-f803-4c2f-a207-a962c6fc3d94 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 04b98d20-f803-4c2f-a207-a962c6fc3d94 ']' 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 [2024-11-18 13:26:21.526165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.623 [2024-11-18 13:26:21.526238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.623 [2024-11-18 13:26:21.526333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.623 [2024-11-18 13:26:21.526418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.623 [2024-11-18 13:26:21.526452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 13:26:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.892 [2024-11-18 13:26:21.681941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:51.892 [2024-11-18 13:26:21.683826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:09:51.892 [2024-11-18 13:26:21.683919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:51.892 [2024-11-18 13:26:21.683996] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:51.892 [2024-11-18 13:26:21.684088] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:51.892 [2024-11-18 13:26:21.684158] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:51.892 [2024-11-18 13:26:21.684224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.892 [2024-11-18 13:26:21.684257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:51.892 request: 00:09:51.892 { 00:09:51.892 "name": "raid_bdev1", 00:09:51.892 "raid_level": "concat", 00:09:51.892 "base_bdevs": [ 00:09:51.892 "malloc1", 00:09:51.892 "malloc2", 00:09:51.892 "malloc3" 00:09:51.892 ], 00:09:51.892 "strip_size_kb": 64, 00:09:51.892 "superblock": false, 00:09:51.892 "method": "bdev_raid_create", 00:09:51.892 "req_id": 1 00:09:51.892 } 00:09:51.892 Got JSON-RPC error response 00:09:51.892 response: 00:09:51.892 { 00:09:51.892 "code": -17, 00:09:51.892 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:51.892 } 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:51.892 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.893 [2024-11-18 13:26:21.745768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:51.893 [2024-11-18 13:26:21.745818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.893 [2024-11-18 13:26:21.745838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:51.893 [2024-11-18 13:26:21.745846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.893 [2024-11-18 13:26:21.747945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.893 [2024-11-18 13:26:21.747985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:51.893 [2024-11-18 13:26:21.748057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:51.893 [2024-11-18 13:26:21.748104] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.893 pt1 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.893 "name": "raid_bdev1", 
00:09:51.893 "uuid": "04b98d20-f803-4c2f-a207-a962c6fc3d94", 00:09:51.893 "strip_size_kb": 64, 00:09:51.893 "state": "configuring", 00:09:51.893 "raid_level": "concat", 00:09:51.893 "superblock": true, 00:09:51.893 "num_base_bdevs": 3, 00:09:51.893 "num_base_bdevs_discovered": 1, 00:09:51.893 "num_base_bdevs_operational": 3, 00:09:51.893 "base_bdevs_list": [ 00:09:51.893 { 00:09:51.893 "name": "pt1", 00:09:51.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.893 "is_configured": true, 00:09:51.893 "data_offset": 2048, 00:09:51.893 "data_size": 63488 00:09:51.893 }, 00:09:51.893 { 00:09:51.893 "name": null, 00:09:51.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.893 "is_configured": false, 00:09:51.893 "data_offset": 2048, 00:09:51.893 "data_size": 63488 00:09:51.893 }, 00:09:51.893 { 00:09:51.893 "name": null, 00:09:51.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.893 "is_configured": false, 00:09:51.893 "data_offset": 2048, 00:09:51.893 "data_size": 63488 00:09:51.893 } 00:09:51.893 ] 00:09:51.893 }' 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.893 13:26:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.151 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:52.151 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.151 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.151 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.410 [2024-11-18 13:26:22.205054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.410 [2024-11-18 13:26:22.205214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.410 [2024-11-18 13:26:22.205260] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:52.410 [2024-11-18 13:26:22.205289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.410 [2024-11-18 13:26:22.205756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.410 [2024-11-18 13:26:22.205815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.410 [2024-11-18 13:26:22.205930] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.410 [2024-11-18 13:26:22.205979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.410 pt2 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.410 [2024-11-18 13:26:22.217029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.410 "name": "raid_bdev1", 00:09:52.410 "uuid": "04b98d20-f803-4c2f-a207-a962c6fc3d94", 00:09:52.410 "strip_size_kb": 64, 00:09:52.410 "state": "configuring", 00:09:52.410 "raid_level": "concat", 00:09:52.410 "superblock": true, 00:09:52.410 "num_base_bdevs": 3, 00:09:52.410 "num_base_bdevs_discovered": 1, 00:09:52.410 "num_base_bdevs_operational": 3, 00:09:52.410 "base_bdevs_list": [ 00:09:52.410 { 00:09:52.410 "name": "pt1", 00:09:52.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.410 "is_configured": true, 00:09:52.410 "data_offset": 2048, 00:09:52.410 "data_size": 63488 00:09:52.410 }, 00:09:52.410 { 00:09:52.410 "name": null, 00:09:52.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.410 "is_configured": false, 00:09:52.410 "data_offset": 0, 00:09:52.410 "data_size": 63488 00:09:52.410 }, 00:09:52.410 { 00:09:52.410 "name": null, 00:09:52.410 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.410 "is_configured": false, 00:09:52.410 "data_offset": 2048, 00:09:52.410 "data_size": 63488 00:09:52.410 } 00:09:52.410 ] 00:09:52.410 }' 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.410 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.670 [2024-11-18 13:26:22.652298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.670 [2024-11-18 13:26:22.652377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.670 [2024-11-18 13:26:22.652396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:52.670 [2024-11-18 13:26:22.652406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.670 [2024-11-18 13:26:22.652859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.670 [2024-11-18 13:26:22.652880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.670 [2024-11-18 13:26:22.652957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.670 [2024-11-18 13:26:22.652979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.670 pt2 00:09:52.670 13:26:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.670 [2024-11-18 13:26:22.664251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:52.670 [2024-11-18 13:26:22.664376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.670 [2024-11-18 13:26:22.664395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:52.670 [2024-11-18 13:26:22.664406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.670 [2024-11-18 13:26:22.664817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.670 [2024-11-18 13:26:22.664840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:52.670 [2024-11-18 13:26:22.664905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:52.670 [2024-11-18 13:26:22.664925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:52.670 [2024-11-18 13:26:22.665047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.670 [2024-11-18 13:26:22.665058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:52.670 [2024-11-18 13:26:22.665317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:52.670 [2024-11-18 13:26:22.665451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.670 [2024-11-18 13:26:22.665487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:52.670 [2024-11-18 13:26:22.665629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.670 pt3 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.670 13:26:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.670 "name": "raid_bdev1", 00:09:52.670 "uuid": "04b98d20-f803-4c2f-a207-a962c6fc3d94", 00:09:52.670 "strip_size_kb": 64, 00:09:52.670 "state": "online", 00:09:52.670 "raid_level": "concat", 00:09:52.670 "superblock": true, 00:09:52.670 "num_base_bdevs": 3, 00:09:52.670 "num_base_bdevs_discovered": 3, 00:09:52.670 "num_base_bdevs_operational": 3, 00:09:52.670 "base_bdevs_list": [ 00:09:52.670 { 00:09:52.670 "name": "pt1", 00:09:52.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.670 "is_configured": true, 00:09:52.670 "data_offset": 2048, 00:09:52.670 "data_size": 63488 00:09:52.670 }, 00:09:52.670 { 00:09:52.670 "name": "pt2", 00:09:52.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.670 "is_configured": true, 00:09:52.670 "data_offset": 2048, 00:09:52.670 "data_size": 63488 00:09:52.670 }, 00:09:52.670 { 00:09:52.670 "name": "pt3", 00:09:52.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.670 "is_configured": true, 00:09:52.670 "data_offset": 2048, 00:09:52.670 "data_size": 63488 00:09:52.670 } 00:09:52.670 ] 00:09:52.670 }' 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.670 13:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.237 [2024-11-18 13:26:23.071870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.237 "name": "raid_bdev1", 00:09:53.237 "aliases": [ 00:09:53.237 "04b98d20-f803-4c2f-a207-a962c6fc3d94" 00:09:53.237 ], 00:09:53.237 "product_name": "Raid Volume", 00:09:53.237 "block_size": 512, 00:09:53.237 "num_blocks": 190464, 00:09:53.237 "uuid": "04b98d20-f803-4c2f-a207-a962c6fc3d94", 00:09:53.237 "assigned_rate_limits": { 00:09:53.237 "rw_ios_per_sec": 0, 00:09:53.237 "rw_mbytes_per_sec": 0, 00:09:53.237 "r_mbytes_per_sec": 0, 00:09:53.237 "w_mbytes_per_sec": 0 00:09:53.237 }, 00:09:53.237 "claimed": false, 00:09:53.237 "zoned": false, 00:09:53.237 "supported_io_types": { 00:09:53.237 "read": true, 00:09:53.237 "write": true, 00:09:53.237 "unmap": true, 00:09:53.237 "flush": true, 00:09:53.237 "reset": true, 00:09:53.237 "nvme_admin": false, 00:09:53.237 "nvme_io": false, 
00:09:53.237 "nvme_io_md": false, 00:09:53.237 "write_zeroes": true, 00:09:53.237 "zcopy": false, 00:09:53.237 "get_zone_info": false, 00:09:53.237 "zone_management": false, 00:09:53.237 "zone_append": false, 00:09:53.237 "compare": false, 00:09:53.237 "compare_and_write": false, 00:09:53.237 "abort": false, 00:09:53.237 "seek_hole": false, 00:09:53.237 "seek_data": false, 00:09:53.237 "copy": false, 00:09:53.237 "nvme_iov_md": false 00:09:53.237 }, 00:09:53.237 "memory_domains": [ 00:09:53.237 { 00:09:53.237 "dma_device_id": "system", 00:09:53.237 "dma_device_type": 1 00:09:53.237 }, 00:09:53.237 { 00:09:53.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.237 "dma_device_type": 2 00:09:53.237 }, 00:09:53.237 { 00:09:53.237 "dma_device_id": "system", 00:09:53.237 "dma_device_type": 1 00:09:53.237 }, 00:09:53.237 { 00:09:53.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.237 "dma_device_type": 2 00:09:53.237 }, 00:09:53.237 { 00:09:53.237 "dma_device_id": "system", 00:09:53.237 "dma_device_type": 1 00:09:53.237 }, 00:09:53.237 { 00:09:53.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.237 "dma_device_type": 2 00:09:53.237 } 00:09:53.237 ], 00:09:53.237 "driver_specific": { 00:09:53.237 "raid": { 00:09:53.237 "uuid": "04b98d20-f803-4c2f-a207-a962c6fc3d94", 00:09:53.237 "strip_size_kb": 64, 00:09:53.237 "state": "online", 00:09:53.237 "raid_level": "concat", 00:09:53.237 "superblock": true, 00:09:53.237 "num_base_bdevs": 3, 00:09:53.237 "num_base_bdevs_discovered": 3, 00:09:53.237 "num_base_bdevs_operational": 3, 00:09:53.237 "base_bdevs_list": [ 00:09:53.237 { 00:09:53.237 "name": "pt1", 00:09:53.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.237 "is_configured": true, 00:09:53.237 "data_offset": 2048, 00:09:53.237 "data_size": 63488 00:09:53.237 }, 00:09:53.237 { 00:09:53.237 "name": "pt2", 00:09:53.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.237 "is_configured": true, 00:09:53.237 "data_offset": 2048, 00:09:53.237 
"data_size": 63488 00:09:53.237 }, 00:09:53.237 { 00:09:53.237 "name": "pt3", 00:09:53.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.237 "is_configured": true, 00:09:53.237 "data_offset": 2048, 00:09:53.237 "data_size": 63488 00:09:53.237 } 00:09:53.237 ] 00:09:53.237 } 00:09:53.237 } 00:09:53.237 }' 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:53.237 pt2 00:09:53.237 pt3' 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.237 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.238 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:53.497 [2024-11-18 13:26:23.339345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 04b98d20-f803-4c2f-a207-a962c6fc3d94 '!=' 04b98d20-f803-4c2f-a207-a962c6fc3d94 ']' 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66876 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66876 ']' 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66876 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66876 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66876' 00:09:53.497 killing process with pid 66876 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66876 00:09:53.497 [2024-11-18 13:26:23.426380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:53.497 [2024-11-18 13:26:23.426543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.497 13:26:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66876 00:09:53.497 [2024-11-18 13:26:23.426631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.497 [2024-11-18 13:26:23.426647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:53.756 [2024-11-18 13:26:23.726296] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.131 13:26:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:55.131 00:09:55.131 real 0m5.229s 00:09:55.131 user 0m7.460s 00:09:55.131 sys 0m0.969s 00:09:55.131 13:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.131 13:26:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.131 ************************************ 00:09:55.131 END TEST raid_superblock_test 00:09:55.131 ************************************ 00:09:55.131 13:26:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:55.131 13:26:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.131 13:26:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.131 13:26:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.131 ************************************ 00:09:55.131 START TEST raid_read_error_test 00:09:55.131 ************************************ 00:09:55.131 13:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:55.131 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:55.131 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:55.131 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:55.131 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:55.131 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.131 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:55.132 13:26:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.E5emOgfUD7 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67128 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67128 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67128 ']' 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.132 13:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.132 [2024-11-18 13:26:25.001867] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:55.132 [2024-11-18 13:26:25.001973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67128 ] 00:09:55.132 [2024-11-18 13:26:25.168011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.391 [2024-11-18 13:26:25.277867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.650 [2024-11-18 13:26:25.480759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.650 [2024-11-18 13:26:25.480822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.909 BaseBdev1_malloc 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.909 true 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.909 [2024-11-18 13:26:25.909708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.909 [2024-11-18 13:26:25.909773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.909 [2024-11-18 13:26:25.909791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.909 [2024-11-18 13:26:25.909803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.909 [2024-11-18 13:26:25.911900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.909 [2024-11-18 13:26:25.912036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.909 BaseBdev1 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.909 BaseBdev2_malloc 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.909 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 true 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 [2024-11-18 13:26:25.975811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:56.169 [2024-11-18 13:26:25.975867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.169 [2024-11-18 13:26:25.975883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:56.169 [2024-11-18 13:26:25.975894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.169 [2024-11-18 13:26:25.977865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.169 [2024-11-18 13:26:25.977907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:56.169 BaseBdev2 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.169 13:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 BaseBdev3_malloc 00:09:56.169 13:26:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 true 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 [2024-11-18 13:26:26.063232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:56.169 [2024-11-18 13:26:26.063286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.169 [2024-11-18 13:26:26.063303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:56.169 [2024-11-18 13:26:26.063313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.169 [2024-11-18 13:26:26.065320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.169 [2024-11-18 13:26:26.065359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:56.169 BaseBdev3 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 [2024-11-18 13:26:26.075290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.169 [2024-11-18 13:26:26.076985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.169 [2024-11-18 13:26:26.077064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.169 [2024-11-18 13:26:26.077263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:56.169 [2024-11-18 13:26:26.077275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:56.169 [2024-11-18 13:26:26.077498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:56.169 [2024-11-18 13:26:26.077635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:56.169 [2024-11-18 13:26:26.077648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:56.169 [2024-11-18 13:26:26.077782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.169 13:26:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.169 "name": "raid_bdev1", 00:09:56.169 "uuid": "8d537265-988f-417f-b855-75c59a3d1de3", 00:09:56.169 "strip_size_kb": 64, 00:09:56.169 "state": "online", 00:09:56.169 "raid_level": "concat", 00:09:56.169 "superblock": true, 00:09:56.169 "num_base_bdevs": 3, 00:09:56.169 "num_base_bdevs_discovered": 3, 00:09:56.169 "num_base_bdevs_operational": 3, 00:09:56.169 "base_bdevs_list": [ 00:09:56.169 { 00:09:56.169 "name": "BaseBdev1", 00:09:56.169 "uuid": "b2976ec7-b73b-5ebb-9729-97ba695f8e60", 00:09:56.169 "is_configured": true, 00:09:56.169 "data_offset": 2048, 00:09:56.169 "data_size": 63488 00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "name": "BaseBdev2", 00:09:56.169 "uuid": "dd4ab110-260d-55f5-9377-2902b7142680", 00:09:56.169 "is_configured": true, 00:09:56.169 "data_offset": 2048, 00:09:56.169 "data_size": 63488 
00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "name": "BaseBdev3", 00:09:56.169 "uuid": "ea42973a-5f46-549d-835b-80db0ca9b6d1", 00:09:56.169 "is_configured": true, 00:09:56.169 "data_offset": 2048, 00:09:56.169 "data_size": 63488 00:09:56.169 } 00:09:56.169 ] 00:09:56.169 }' 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.169 13:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.737 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:56.737 13:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:56.737 [2024-11-18 13:26:26.603850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:57.703 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.704 "name": "raid_bdev1", 00:09:57.704 "uuid": "8d537265-988f-417f-b855-75c59a3d1de3", 00:09:57.704 "strip_size_kb": 64, 00:09:57.704 "state": "online", 00:09:57.704 "raid_level": "concat", 00:09:57.704 "superblock": true, 00:09:57.704 "num_base_bdevs": 3, 00:09:57.704 "num_base_bdevs_discovered": 3, 00:09:57.704 "num_base_bdevs_operational": 3, 00:09:57.704 "base_bdevs_list": [ 00:09:57.704 { 00:09:57.704 "name": "BaseBdev1", 00:09:57.704 "uuid": "b2976ec7-b73b-5ebb-9729-97ba695f8e60", 00:09:57.704 "is_configured": true, 00:09:57.704 "data_offset": 2048, 00:09:57.704 "data_size": 63488 
00:09:57.704 }, 00:09:57.704 { 00:09:57.704 "name": "BaseBdev2", 00:09:57.704 "uuid": "dd4ab110-260d-55f5-9377-2902b7142680", 00:09:57.704 "is_configured": true, 00:09:57.704 "data_offset": 2048, 00:09:57.704 "data_size": 63488 00:09:57.704 }, 00:09:57.704 { 00:09:57.704 "name": "BaseBdev3", 00:09:57.704 "uuid": "ea42973a-5f46-549d-835b-80db0ca9b6d1", 00:09:57.704 "is_configured": true, 00:09:57.704 "data_offset": 2048, 00:09:57.704 "data_size": 63488 00:09:57.704 } 00:09:57.704 ] 00:09:57.704 }' 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.704 13:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.964 13:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:57.964 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.964 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.964 [2024-11-18 13:26:28.010517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:57.964 [2024-11-18 13:26:28.010646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.964 [2024-11-18 13:26:28.013284] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.964 [2024-11-18 13:26:28.013329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.964 [2024-11-18 13:26:28.013364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.964 [2024-11-18 13:26:28.013376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:57.964 { 00:09:57.964 "results": [ 00:09:57.964 { 00:09:57.964 "job": "raid_bdev1", 00:09:57.964 "core_mask": "0x1", 00:09:57.964 "workload": "randrw", 00:09:57.964 "percentage": 50, 
00:09:57.964 "status": "finished", 00:09:57.964 "queue_depth": 1, 00:09:57.964 "io_size": 131072, 00:09:57.964 "runtime": 1.407705, 00:09:57.965 "iops": 16229.962953885935, 00:09:57.965 "mibps": 2028.745369235742, 00:09:57.965 "io_failed": 1, 00:09:57.965 "io_timeout": 0, 00:09:57.965 "avg_latency_us": 85.668464765819, 00:09:57.965 "min_latency_us": 25.9353711790393, 00:09:57.965 "max_latency_us": 1345.0620087336245 00:09:57.965 } 00:09:57.965 ], 00:09:57.965 "core_count": 1 00:09:57.965 } 00:09:57.965 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.965 13:26:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67128 00:09:57.965 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67128 ']' 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67128 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67128 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67128' 00:09:58.225 killing process with pid 67128 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67128 00:09:58.225 [2024-11-18 13:26:28.053205] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.225 13:26:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67128 00:09:58.485 [2024-11-18 
13:26:28.278657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.423 13:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.E5emOgfUD7 00:09:59.423 13:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.423 13:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.423 13:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:59.423 13:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:59.423 13:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.423 13:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.423 13:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:59.423 ************************************ 00:09:59.423 END TEST raid_read_error_test 00:09:59.423 ************************************ 00:09:59.423 00:09:59.424 real 0m4.544s 00:09:59.424 user 0m5.408s 00:09:59.424 sys 0m0.583s 00:09:59.424 13:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.424 13:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.684 13:26:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:59.684 13:26:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:59.684 13:26:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.684 13:26:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.684 ************************************ 00:09:59.684 START TEST raid_write_error_test 00:09:59.684 ************************************ 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:59.684 13:26:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:59.684 13:26:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XuUtVAQPDF 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67275 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67275 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67275 ']' 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.684 13:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.684 [2024-11-18 13:26:29.620502] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:59.684 [2024-11-18 13:26:29.620747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67275 ] 00:09:59.945 [2024-11-18 13:26:29.802051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.945 [2024-11-18 13:26:29.916835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.206 [2024-11-18 13:26:30.112316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.206 [2024-11-18 13:26:30.112455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.466 BaseBdev1_malloc 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.466 true 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.466 [2024-11-18 13:26:30.502766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:00.466 [2024-11-18 13:26:30.502840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.466 [2024-11-18 13:26:30.502865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:00.466 [2024-11-18 13:26:30.502878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.466 [2024-11-18 13:26:30.505161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.466 [2024-11-18 13:26:30.505201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:00.466 BaseBdev1 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.466 13:26:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.727 BaseBdev2_malloc 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.727 true 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.727 [2024-11-18 13:26:30.570844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:00.727 [2024-11-18 13:26:30.570912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.727 [2024-11-18 13:26:30.570928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:00.727 [2024-11-18 13:26:30.570939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.727 [2024-11-18 13:26:30.572958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.727 [2024-11-18 13:26:30.573001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:00.727 BaseBdev2 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.727 13:26:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.727 BaseBdev3_malloc 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.727 true 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.727 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.727 [2024-11-18 13:26:30.648648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:00.727 [2024-11-18 13:26:30.648716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.727 [2024-11-18 13:26:30.648735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:00.727 [2024-11-18 13:26:30.648746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.727 [2024-11-18 13:26:30.650818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.727 [2024-11-18 13:26:30.650860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:00.728 BaseBdev3 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.728 [2024-11-18 13:26:30.660705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.728 [2024-11-18 13:26:30.662605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.728 [2024-11-18 13:26:30.662682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.728 [2024-11-18 13:26:30.662879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:00.728 [2024-11-18 13:26:30.662892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:00.728 [2024-11-18 13:26:30.663153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:00.728 [2024-11-18 13:26:30.663306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:00.728 [2024-11-18 13:26:30.663319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:00.728 [2024-11-18 13:26:30.663471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.728 "name": "raid_bdev1", 00:10:00.728 "uuid": "18841f37-5844-4ceb-8692-0fcbbf57f33a", 00:10:00.728 "strip_size_kb": 64, 00:10:00.728 "state": "online", 00:10:00.728 "raid_level": "concat", 00:10:00.728 "superblock": true, 00:10:00.728 "num_base_bdevs": 3, 00:10:00.728 "num_base_bdevs_discovered": 3, 00:10:00.728 "num_base_bdevs_operational": 3, 00:10:00.728 "base_bdevs_list": [ 00:10:00.728 { 00:10:00.728 
"name": "BaseBdev1", 00:10:00.728 "uuid": "7f532566-1011-5ecc-bff6-654a9b4542a3", 00:10:00.728 "is_configured": true, 00:10:00.728 "data_offset": 2048, 00:10:00.728 "data_size": 63488 00:10:00.728 }, 00:10:00.728 { 00:10:00.728 "name": "BaseBdev2", 00:10:00.728 "uuid": "fcf211f7-399c-5d18-969c-e4f5bfc5c087", 00:10:00.728 "is_configured": true, 00:10:00.728 "data_offset": 2048, 00:10:00.728 "data_size": 63488 00:10:00.728 }, 00:10:00.728 { 00:10:00.728 "name": "BaseBdev3", 00:10:00.728 "uuid": "904df42d-f76b-58ca-b053-2206a624b277", 00:10:00.728 "is_configured": true, 00:10:00.728 "data_offset": 2048, 00:10:00.728 "data_size": 63488 00:10:00.728 } 00:10:00.728 ] 00:10:00.728 }' 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.728 13:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.299 13:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:01.299 13:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:01.299 [2024-11-18 13:26:31.205176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.237 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.238 "name": "raid_bdev1", 00:10:02.238 "uuid": "18841f37-5844-4ceb-8692-0fcbbf57f33a", 00:10:02.238 "strip_size_kb": 64, 00:10:02.238 "state": "online", 
00:10:02.238 "raid_level": "concat", 00:10:02.238 "superblock": true, 00:10:02.238 "num_base_bdevs": 3, 00:10:02.238 "num_base_bdevs_discovered": 3, 00:10:02.238 "num_base_bdevs_operational": 3, 00:10:02.238 "base_bdevs_list": [ 00:10:02.238 { 00:10:02.238 "name": "BaseBdev1", 00:10:02.238 "uuid": "7f532566-1011-5ecc-bff6-654a9b4542a3", 00:10:02.238 "is_configured": true, 00:10:02.238 "data_offset": 2048, 00:10:02.238 "data_size": 63488 00:10:02.238 }, 00:10:02.238 { 00:10:02.238 "name": "BaseBdev2", 00:10:02.238 "uuid": "fcf211f7-399c-5d18-969c-e4f5bfc5c087", 00:10:02.238 "is_configured": true, 00:10:02.238 "data_offset": 2048, 00:10:02.238 "data_size": 63488 00:10:02.238 }, 00:10:02.238 { 00:10:02.238 "name": "BaseBdev3", 00:10:02.238 "uuid": "904df42d-f76b-58ca-b053-2206a624b277", 00:10:02.238 "is_configured": true, 00:10:02.238 "data_offset": 2048, 00:10:02.238 "data_size": 63488 00:10:02.238 } 00:10:02.238 ] 00:10:02.238 }' 00:10:02.238 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.238 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.806 [2024-11-18 13:26:32.571333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.806 [2024-11-18 13:26:32.571370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.806 [2024-11-18 13:26:32.573957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.806 [2024-11-18 13:26:32.574002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.806 [2024-11-18 13:26:32.574038] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.806 [2024-11-18 13:26:32.574049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:02.806 { 00:10:02.806 "results": [ 00:10:02.806 { 00:10:02.806 "job": "raid_bdev1", 00:10:02.806 "core_mask": "0x1", 00:10:02.806 "workload": "randrw", 00:10:02.806 "percentage": 50, 00:10:02.806 "status": "finished", 00:10:02.806 "queue_depth": 1, 00:10:02.806 "io_size": 131072, 00:10:02.806 "runtime": 1.366856, 00:10:02.806 "iops": 16326.518667657749, 00:10:02.806 "mibps": 2040.8148334572186, 00:10:02.806 "io_failed": 1, 00:10:02.806 "io_timeout": 0, 00:10:02.806 "avg_latency_us": 85.04136439743881, 00:10:02.806 "min_latency_us": 24.929257641921396, 00:10:02.806 "max_latency_us": 1380.8349344978167 00:10:02.806 } 00:10:02.806 ], 00:10:02.806 "core_count": 1 00:10:02.806 } 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67275 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67275 ']' 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67275 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67275 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.806 killing process with pid 67275 00:10:02.806 
13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67275' 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67275 00:10:02.806 [2024-11-18 13:26:32.607197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.806 13:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67275 00:10:02.806 [2024-11-18 13:26:32.829090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XuUtVAQPDF 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:04.186 00:10:04.186 real 0m4.471s 00:10:04.186 user 0m5.292s 00:10:04.186 sys 0m0.574s 00:10:04.186 ************************************ 00:10:04.186 END TEST raid_write_error_test 00:10:04.186 ************************************ 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.186 13:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.186 13:26:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:04.186 13:26:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:04.186 13:26:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:04.186 13:26:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.186 13:26:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.186 ************************************ 00:10:04.186 START TEST raid_state_function_test 00:10:04.186 ************************************ 00:10:04.186 13:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:04.186 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:04.186 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:04.186 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:04.186 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67413 00:10:04.187 Process raid pid: 67413 00:10:04.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67413' 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67413 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67413 ']' 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.187 13:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.187 [2024-11-18 13:26:34.152410] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:04.187 [2024-11-18 13:26:34.152663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.446 [2024-11-18 13:26:34.332986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.446 [2024-11-18 13:26:34.447732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.706 [2024-11-18 13:26:34.648974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.706 [2024-11-18 13:26:34.649058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.274 [2024-11-18 13:26:35.034292] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.274 [2024-11-18 13:26:35.034356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.274 [2024-11-18 13:26:35.034373] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.274 [2024-11-18 13:26:35.034382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.274 [2024-11-18 13:26:35.034388] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:10:05.274 [2024-11-18 13:26:35.034413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.274 13:26:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.274 "name": "Existed_Raid", 00:10:05.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.274 "strip_size_kb": 0, 00:10:05.274 "state": "configuring", 00:10:05.274 "raid_level": "raid1", 00:10:05.274 "superblock": false, 00:10:05.274 "num_base_bdevs": 3, 00:10:05.274 "num_base_bdevs_discovered": 0, 00:10:05.274 "num_base_bdevs_operational": 3, 00:10:05.274 "base_bdevs_list": [ 00:10:05.274 { 00:10:05.274 "name": "BaseBdev1", 00:10:05.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.274 "is_configured": false, 00:10:05.274 "data_offset": 0, 00:10:05.274 "data_size": 0 00:10:05.274 }, 00:10:05.274 { 00:10:05.274 "name": "BaseBdev2", 00:10:05.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.274 "is_configured": false, 00:10:05.274 "data_offset": 0, 00:10:05.274 "data_size": 0 00:10:05.274 }, 00:10:05.274 { 00:10:05.274 "name": "BaseBdev3", 00:10:05.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.274 "is_configured": false, 00:10:05.274 "data_offset": 0, 00:10:05.274 "data_size": 0 00:10:05.274 } 00:10:05.274 ] 00:10:05.274 }' 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.274 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 [2024-11-18 13:26:35.493458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.534 [2024-11-18 13:26:35.493584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 [2024-11-18 13:26:35.501448] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.534 [2024-11-18 13:26:35.501500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.534 [2024-11-18 13:26:35.501511] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.534 [2024-11-18 13:26:35.501522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.534 [2024-11-18 13:26:35.501530] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.534 [2024-11-18 13:26:35.501540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 [2024-11-18 13:26:35.545968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.534 BaseBdev1 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.534 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 [ 00:10:05.534 { 00:10:05.534 "name": "BaseBdev1", 00:10:05.534 "aliases": [ 00:10:05.534 "d99dd66d-8d64-4724-9578-53054f6eb0f4" 00:10:05.534 ], 00:10:05.534 "product_name": "Malloc disk", 00:10:05.534 "block_size": 512, 00:10:05.534 "num_blocks": 65536, 00:10:05.534 "uuid": "d99dd66d-8d64-4724-9578-53054f6eb0f4", 00:10:05.534 "assigned_rate_limits": { 00:10:05.534 "rw_ios_per_sec": 0, 00:10:05.534 "rw_mbytes_per_sec": 0, 00:10:05.534 "r_mbytes_per_sec": 0, 00:10:05.534 "w_mbytes_per_sec": 0 00:10:05.534 }, 
00:10:05.534 "claimed": true, 00:10:05.534 "claim_type": "exclusive_write", 00:10:05.534 "zoned": false, 00:10:05.534 "supported_io_types": { 00:10:05.534 "read": true, 00:10:05.534 "write": true, 00:10:05.534 "unmap": true, 00:10:05.534 "flush": true, 00:10:05.534 "reset": true, 00:10:05.534 "nvme_admin": false, 00:10:05.534 "nvme_io": false, 00:10:05.534 "nvme_io_md": false, 00:10:05.534 "write_zeroes": true, 00:10:05.534 "zcopy": true, 00:10:05.534 "get_zone_info": false, 00:10:05.534 "zone_management": false, 00:10:05.534 "zone_append": false, 00:10:05.534 "compare": false, 00:10:05.534 "compare_and_write": false, 00:10:05.534 "abort": true, 00:10:05.534 "seek_hole": false, 00:10:05.534 "seek_data": false, 00:10:05.534 "copy": true, 00:10:05.534 "nvme_iov_md": false 00:10:05.534 }, 00:10:05.534 "memory_domains": [ 00:10:05.534 { 00:10:05.534 "dma_device_id": "system", 00:10:05.535 "dma_device_type": 1 00:10:05.535 }, 00:10:05.535 { 00:10:05.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.535 "dma_device_type": 2 00:10:05.535 } 00:10:05.535 ], 00:10:05.535 "driver_specific": {} 00:10:05.535 } 00:10:05.535 ] 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.535 13:26:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.535 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.794 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.794 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.794 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.794 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.794 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.794 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.794 "name": "Existed_Raid", 00:10:05.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.794 "strip_size_kb": 0, 00:10:05.794 "state": "configuring", 00:10:05.794 "raid_level": "raid1", 00:10:05.794 "superblock": false, 00:10:05.794 "num_base_bdevs": 3, 00:10:05.794 "num_base_bdevs_discovered": 1, 00:10:05.794 "num_base_bdevs_operational": 3, 00:10:05.794 "base_bdevs_list": [ 00:10:05.794 { 00:10:05.794 "name": "BaseBdev1", 00:10:05.794 "uuid": "d99dd66d-8d64-4724-9578-53054f6eb0f4", 00:10:05.794 "is_configured": true, 00:10:05.794 "data_offset": 0, 00:10:05.794 "data_size": 65536 00:10:05.794 }, 00:10:05.794 { 00:10:05.794 "name": "BaseBdev2", 00:10:05.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.794 "is_configured": false, 00:10:05.794 
"data_offset": 0, 00:10:05.794 "data_size": 0 00:10:05.794 }, 00:10:05.794 { 00:10:05.794 "name": "BaseBdev3", 00:10:05.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.794 "is_configured": false, 00:10:05.794 "data_offset": 0, 00:10:05.794 "data_size": 0 00:10:05.794 } 00:10:05.794 ] 00:10:05.794 }' 00:10:05.794 13:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.794 13:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 [2024-11-18 13:26:36.049173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.054 [2024-11-18 13:26:36.049314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 [2024-11-18 13:26:36.057186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.054 [2024-11-18 13:26:36.059059] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.054 [2024-11-18 13:26:36.059109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:10:06.054 [2024-11-18 13:26:36.059120] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.054 [2024-11-18 13:26:36.059146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 
13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.054 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.312 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.312 "name": "Existed_Raid", 00:10:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.312 "strip_size_kb": 0, 00:10:06.312 "state": "configuring", 00:10:06.312 "raid_level": "raid1", 00:10:06.312 "superblock": false, 00:10:06.312 "num_base_bdevs": 3, 00:10:06.312 "num_base_bdevs_discovered": 1, 00:10:06.312 "num_base_bdevs_operational": 3, 00:10:06.312 "base_bdevs_list": [ 00:10:06.312 { 00:10:06.312 "name": "BaseBdev1", 00:10:06.312 "uuid": "d99dd66d-8d64-4724-9578-53054f6eb0f4", 00:10:06.312 "is_configured": true, 00:10:06.312 "data_offset": 0, 00:10:06.312 "data_size": 65536 00:10:06.312 }, 00:10:06.312 { 00:10:06.312 "name": "BaseBdev2", 00:10:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.312 "is_configured": false, 00:10:06.312 "data_offset": 0, 00:10:06.312 "data_size": 0 00:10:06.312 }, 00:10:06.312 { 00:10:06.312 "name": "BaseBdev3", 00:10:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.313 "is_configured": false, 00:10:06.313 "data_offset": 0, 00:10:06.313 "data_size": 0 00:10:06.313 } 00:10:06.313 ] 00:10:06.313 }' 00:10:06.313 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.313 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.572 13:26:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.572 [2024-11-18 13:26:36.497901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.572 BaseBdev2 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.572 [ 00:10:06.572 { 00:10:06.572 "name": "BaseBdev2", 00:10:06.572 "aliases": [ 00:10:06.572 "4958352f-b617-4be9-b87f-9cdfaaf242b6" 00:10:06.572 ], 00:10:06.572 "product_name": "Malloc disk", 
00:10:06.572 "block_size": 512, 00:10:06.572 "num_blocks": 65536, 00:10:06.572 "uuid": "4958352f-b617-4be9-b87f-9cdfaaf242b6", 00:10:06.572 "assigned_rate_limits": { 00:10:06.572 "rw_ios_per_sec": 0, 00:10:06.572 "rw_mbytes_per_sec": 0, 00:10:06.572 "r_mbytes_per_sec": 0, 00:10:06.572 "w_mbytes_per_sec": 0 00:10:06.572 }, 00:10:06.572 "claimed": true, 00:10:06.572 "claim_type": "exclusive_write", 00:10:06.572 "zoned": false, 00:10:06.572 "supported_io_types": { 00:10:06.572 "read": true, 00:10:06.572 "write": true, 00:10:06.572 "unmap": true, 00:10:06.572 "flush": true, 00:10:06.572 "reset": true, 00:10:06.572 "nvme_admin": false, 00:10:06.572 "nvme_io": false, 00:10:06.572 "nvme_io_md": false, 00:10:06.572 "write_zeroes": true, 00:10:06.572 "zcopy": true, 00:10:06.572 "get_zone_info": false, 00:10:06.572 "zone_management": false, 00:10:06.572 "zone_append": false, 00:10:06.572 "compare": false, 00:10:06.572 "compare_and_write": false, 00:10:06.572 "abort": true, 00:10:06.572 "seek_hole": false, 00:10:06.572 "seek_data": false, 00:10:06.572 "copy": true, 00:10:06.572 "nvme_iov_md": false 00:10:06.572 }, 00:10:06.572 "memory_domains": [ 00:10:06.572 { 00:10:06.572 "dma_device_id": "system", 00:10:06.572 "dma_device_type": 1 00:10:06.572 }, 00:10:06.572 { 00:10:06.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.572 "dma_device_type": 2 00:10:06.572 } 00:10:06.572 ], 00:10:06.572 "driver_specific": {} 00:10:06.572 } 00:10:06.572 ] 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.572 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.573 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.573 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.573 "name": "Existed_Raid", 00:10:06.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.573 "strip_size_kb": 0, 00:10:06.573 "state": "configuring", 00:10:06.573 "raid_level": "raid1", 00:10:06.573 "superblock": false, 00:10:06.573 "num_base_bdevs": 3, 
00:10:06.573 "num_base_bdevs_discovered": 2, 00:10:06.573 "num_base_bdevs_operational": 3, 00:10:06.573 "base_bdevs_list": [ 00:10:06.573 { 00:10:06.573 "name": "BaseBdev1", 00:10:06.573 "uuid": "d99dd66d-8d64-4724-9578-53054f6eb0f4", 00:10:06.573 "is_configured": true, 00:10:06.573 "data_offset": 0, 00:10:06.573 "data_size": 65536 00:10:06.573 }, 00:10:06.573 { 00:10:06.573 "name": "BaseBdev2", 00:10:06.573 "uuid": "4958352f-b617-4be9-b87f-9cdfaaf242b6", 00:10:06.573 "is_configured": true, 00:10:06.573 "data_offset": 0, 00:10:06.573 "data_size": 65536 00:10:06.573 }, 00:10:06.573 { 00:10:06.573 "name": "BaseBdev3", 00:10:06.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.573 "is_configured": false, 00:10:06.573 "data_offset": 0, 00:10:06.573 "data_size": 0 00:10:06.573 } 00:10:06.573 ] 00:10:06.573 }' 00:10:06.573 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.573 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.142 13:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.142 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.142 13:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.142 [2024-11-18 13:26:37.046064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.142 [2024-11-18 13:26:37.046191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:07.142 [2024-11-18 13:26:37.046223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:07.142 [2024-11-18 13:26:37.046550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:07.142 [2024-11-18 13:26:37.046754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:10:07.142 [2024-11-18 13:26:37.046793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:07.142 [2024-11-18 13:26:37.047083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.142 BaseBdev3 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.142 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.142 [ 00:10:07.142 { 00:10:07.142 "name": "BaseBdev3", 00:10:07.142 "aliases": [ 00:10:07.143 
"257a8440-9344-4e70-b647-a8be4a7042a1" 00:10:07.143 ], 00:10:07.143 "product_name": "Malloc disk", 00:10:07.143 "block_size": 512, 00:10:07.143 "num_blocks": 65536, 00:10:07.143 "uuid": "257a8440-9344-4e70-b647-a8be4a7042a1", 00:10:07.143 "assigned_rate_limits": { 00:10:07.143 "rw_ios_per_sec": 0, 00:10:07.143 "rw_mbytes_per_sec": 0, 00:10:07.143 "r_mbytes_per_sec": 0, 00:10:07.143 "w_mbytes_per_sec": 0 00:10:07.143 }, 00:10:07.143 "claimed": true, 00:10:07.143 "claim_type": "exclusive_write", 00:10:07.143 "zoned": false, 00:10:07.143 "supported_io_types": { 00:10:07.143 "read": true, 00:10:07.143 "write": true, 00:10:07.143 "unmap": true, 00:10:07.143 "flush": true, 00:10:07.143 "reset": true, 00:10:07.143 "nvme_admin": false, 00:10:07.143 "nvme_io": false, 00:10:07.143 "nvme_io_md": false, 00:10:07.143 "write_zeroes": true, 00:10:07.143 "zcopy": true, 00:10:07.143 "get_zone_info": false, 00:10:07.143 "zone_management": false, 00:10:07.143 "zone_append": false, 00:10:07.143 "compare": false, 00:10:07.143 "compare_and_write": false, 00:10:07.143 "abort": true, 00:10:07.143 "seek_hole": false, 00:10:07.143 "seek_data": false, 00:10:07.143 "copy": true, 00:10:07.143 "nvme_iov_md": false 00:10:07.143 }, 00:10:07.143 "memory_domains": [ 00:10:07.143 { 00:10:07.143 "dma_device_id": "system", 00:10:07.143 "dma_device_type": 1 00:10:07.143 }, 00:10:07.143 { 00:10:07.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.143 "dma_device_type": 2 00:10:07.143 } 00:10:07.143 ], 00:10:07.143 "driver_specific": {} 00:10:07.143 } 00:10:07.143 ] 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.143 
13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.143 "name": "Existed_Raid", 00:10:07.143 "uuid": "a94886d1-663b-4aa7-81af-6f37c0dc2777", 00:10:07.143 "strip_size_kb": 0, 00:10:07.143 "state": "online", 00:10:07.143 "raid_level": 
"raid1", 00:10:07.143 "superblock": false, 00:10:07.143 "num_base_bdevs": 3, 00:10:07.143 "num_base_bdevs_discovered": 3, 00:10:07.143 "num_base_bdevs_operational": 3, 00:10:07.143 "base_bdevs_list": [ 00:10:07.143 { 00:10:07.143 "name": "BaseBdev1", 00:10:07.143 "uuid": "d99dd66d-8d64-4724-9578-53054f6eb0f4", 00:10:07.143 "is_configured": true, 00:10:07.143 "data_offset": 0, 00:10:07.143 "data_size": 65536 00:10:07.143 }, 00:10:07.143 { 00:10:07.143 "name": "BaseBdev2", 00:10:07.143 "uuid": "4958352f-b617-4be9-b87f-9cdfaaf242b6", 00:10:07.143 "is_configured": true, 00:10:07.143 "data_offset": 0, 00:10:07.143 "data_size": 65536 00:10:07.143 }, 00:10:07.143 { 00:10:07.143 "name": "BaseBdev3", 00:10:07.143 "uuid": "257a8440-9344-4e70-b647-a8be4a7042a1", 00:10:07.143 "is_configured": true, 00:10:07.143 "data_offset": 0, 00:10:07.143 "data_size": 65536 00:10:07.143 } 00:10:07.143 ] 00:10:07.143 }' 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.143 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.712 [2024-11-18 13:26:37.497705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.712 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.712 "name": "Existed_Raid", 00:10:07.712 "aliases": [ 00:10:07.712 "a94886d1-663b-4aa7-81af-6f37c0dc2777" 00:10:07.712 ], 00:10:07.712 "product_name": "Raid Volume", 00:10:07.712 "block_size": 512, 00:10:07.712 "num_blocks": 65536, 00:10:07.712 "uuid": "a94886d1-663b-4aa7-81af-6f37c0dc2777", 00:10:07.712 "assigned_rate_limits": { 00:10:07.712 "rw_ios_per_sec": 0, 00:10:07.712 "rw_mbytes_per_sec": 0, 00:10:07.712 "r_mbytes_per_sec": 0, 00:10:07.712 "w_mbytes_per_sec": 0 00:10:07.712 }, 00:10:07.712 "claimed": false, 00:10:07.712 "zoned": false, 00:10:07.712 "supported_io_types": { 00:10:07.712 "read": true, 00:10:07.712 "write": true, 00:10:07.712 "unmap": false, 00:10:07.712 "flush": false, 00:10:07.712 "reset": true, 00:10:07.712 "nvme_admin": false, 00:10:07.712 "nvme_io": false, 00:10:07.712 "nvme_io_md": false, 00:10:07.712 "write_zeroes": true, 00:10:07.712 "zcopy": false, 00:10:07.712 "get_zone_info": false, 00:10:07.712 "zone_management": false, 00:10:07.712 "zone_append": false, 00:10:07.712 "compare": false, 00:10:07.712 "compare_and_write": false, 00:10:07.712 "abort": false, 00:10:07.712 "seek_hole": false, 00:10:07.712 "seek_data": false, 00:10:07.712 "copy": false, 00:10:07.712 "nvme_iov_md": false 00:10:07.712 }, 00:10:07.712 "memory_domains": [ 00:10:07.712 { 00:10:07.712 "dma_device_id": "system", 00:10:07.712 "dma_device_type": 1 00:10:07.712 }, 00:10:07.712 { 00:10:07.712 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.712 "dma_device_type": 2 00:10:07.712 }, 00:10:07.712 { 00:10:07.712 "dma_device_id": "system", 00:10:07.712 "dma_device_type": 1 00:10:07.712 }, 00:10:07.712 { 00:10:07.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.712 "dma_device_type": 2 00:10:07.712 }, 00:10:07.712 { 00:10:07.712 "dma_device_id": "system", 00:10:07.712 "dma_device_type": 1 00:10:07.712 }, 00:10:07.712 { 00:10:07.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.712 "dma_device_type": 2 00:10:07.712 } 00:10:07.712 ], 00:10:07.712 "driver_specific": { 00:10:07.712 "raid": { 00:10:07.712 "uuid": "a94886d1-663b-4aa7-81af-6f37c0dc2777", 00:10:07.712 "strip_size_kb": 0, 00:10:07.712 "state": "online", 00:10:07.712 "raid_level": "raid1", 00:10:07.712 "superblock": false, 00:10:07.712 "num_base_bdevs": 3, 00:10:07.712 "num_base_bdevs_discovered": 3, 00:10:07.712 "num_base_bdevs_operational": 3, 00:10:07.712 "base_bdevs_list": [ 00:10:07.712 { 00:10:07.712 "name": "BaseBdev1", 00:10:07.712 "uuid": "d99dd66d-8d64-4724-9578-53054f6eb0f4", 00:10:07.712 "is_configured": true, 00:10:07.712 "data_offset": 0, 00:10:07.712 "data_size": 65536 00:10:07.712 }, 00:10:07.712 { 00:10:07.712 "name": "BaseBdev2", 00:10:07.713 "uuid": "4958352f-b617-4be9-b87f-9cdfaaf242b6", 00:10:07.713 "is_configured": true, 00:10:07.713 "data_offset": 0, 00:10:07.713 "data_size": 65536 00:10:07.713 }, 00:10:07.713 { 00:10:07.713 "name": "BaseBdev3", 00:10:07.713 "uuid": "257a8440-9344-4e70-b647-a8be4a7042a1", 00:10:07.713 "is_configured": true, 00:10:07.713 "data_offset": 0, 00:10:07.713 "data_size": 65536 00:10:07.713 } 00:10:07.713 ] 00:10:07.713 } 00:10:07.713 } 00:10:07.713 }' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 
00:10:07.713 BaseBdev2 00:10:07.713 BaseBdev3' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.713 13:26:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.713 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.972 [2024-11-18 13:26:37.772909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case 
$1 in 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.972 "name": "Existed_Raid", 00:10:07.972 "uuid": "a94886d1-663b-4aa7-81af-6f37c0dc2777", 00:10:07.972 "strip_size_kb": 0, 00:10:07.972 "state": "online", 00:10:07.972 "raid_level": "raid1", 00:10:07.972 "superblock": false, 00:10:07.972 "num_base_bdevs": 3, 00:10:07.972 "num_base_bdevs_discovered": 2, 00:10:07.972 "num_base_bdevs_operational": 2, 00:10:07.972 "base_bdevs_list": [ 00:10:07.972 { 00:10:07.972 "name": null, 00:10:07.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.972 "is_configured": false, 00:10:07.972 "data_offset": 0, 00:10:07.972 "data_size": 65536 00:10:07.972 }, 00:10:07.972 { 00:10:07.972 "name": "BaseBdev2", 00:10:07.972 "uuid": "4958352f-b617-4be9-b87f-9cdfaaf242b6", 00:10:07.972 "is_configured": true, 00:10:07.972 "data_offset": 0, 00:10:07.972 "data_size": 65536 00:10:07.972 }, 00:10:07.972 { 00:10:07.972 "name": "BaseBdev3", 00:10:07.972 "uuid": "257a8440-9344-4e70-b647-a8be4a7042a1", 00:10:07.972 "is_configured": true, 00:10:07.972 "data_offset": 0, 00:10:07.972 "data_size": 65536 00:10:07.972 } 00:10:07.972 ] 00:10:07.972 }' 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.972 13:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.541 [2024-11-18 13:26:38.349654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete 
BaseBdev3 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.541 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.541 [2024-11-18 13:26:38.503035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.541 [2024-11-18 13:26:38.503208] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.801 [2024-11-18 13:26:38.597005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.801 [2024-11-18 13:26:38.597154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.801 [2024-11-18 13:26:38.597173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:08.801 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.801 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.801 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.801 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.801 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:08.802 13:26:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 BaseBdev2 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs 
-b BaseBdev2 -t 2000 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 [ 00:10:08.802 { 00:10:08.802 "name": "BaseBdev2", 00:10:08.802 "aliases": [ 00:10:08.802 "989b41f7-a599-4e97-b318-0466c9e9760c" 00:10:08.802 ], 00:10:08.802 "product_name": "Malloc disk", 00:10:08.802 "block_size": 512, 00:10:08.802 "num_blocks": 65536, 00:10:08.802 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:08.802 "assigned_rate_limits": { 00:10:08.802 "rw_ios_per_sec": 0, 00:10:08.802 "rw_mbytes_per_sec": 0, 00:10:08.802 "r_mbytes_per_sec": 0, 00:10:08.802 "w_mbytes_per_sec": 0 00:10:08.802 }, 00:10:08.802 "claimed": false, 00:10:08.802 "zoned": false, 00:10:08.802 "supported_io_types": { 00:10:08.802 "read": true, 00:10:08.802 "write": true, 00:10:08.802 "unmap": true, 00:10:08.802 "flush": true, 00:10:08.802 "reset": true, 00:10:08.802 "nvme_admin": false, 00:10:08.802 "nvme_io": false, 00:10:08.802 "nvme_io_md": false, 00:10:08.802 "write_zeroes": true, 00:10:08.802 "zcopy": true, 00:10:08.802 "get_zone_info": false, 00:10:08.802 "zone_management": false, 00:10:08.802 "zone_append": false, 00:10:08.802 "compare": false, 00:10:08.802 "compare_and_write": false, 00:10:08.802 "abort": true, 00:10:08.802 "seek_hole": false, 00:10:08.802 "seek_data": false, 00:10:08.802 "copy": true, 00:10:08.802 "nvme_iov_md": false 00:10:08.802 }, 00:10:08.802 "memory_domains": [ 00:10:08.802 { 00:10:08.802 "dma_device_id": "system", 00:10:08.802 "dma_device_type": 1 00:10:08.802 }, 00:10:08.802 { 00:10:08.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.802 "dma_device_type": 2 00:10:08.802 } 00:10:08.802 ], 00:10:08.802 "driver_specific": {} 00:10:08.802 } 00:10:08.802 ] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.802 13:26:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 BaseBdev3 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 [ 00:10:08.802 { 00:10:08.802 "name": "BaseBdev3", 00:10:08.802 "aliases": [ 00:10:08.802 "b1c773c1-a5d0-4641-ad1c-47a81f5da823" 00:10:08.802 ], 00:10:08.802 "product_name": "Malloc disk", 00:10:08.802 "block_size": 512, 00:10:08.802 "num_blocks": 65536, 00:10:08.802 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:08.802 "assigned_rate_limits": { 00:10:08.802 "rw_ios_per_sec": 0, 00:10:08.802 "rw_mbytes_per_sec": 0, 00:10:08.802 "r_mbytes_per_sec": 0, 00:10:08.802 "w_mbytes_per_sec": 0 00:10:08.802 }, 00:10:08.802 "claimed": false, 00:10:08.802 "zoned": false, 00:10:08.802 "supported_io_types": { 00:10:08.802 "read": true, 00:10:08.802 "write": true, 00:10:08.802 "unmap": true, 00:10:08.802 "flush": true, 00:10:08.802 "reset": true, 00:10:08.802 "nvme_admin": false, 00:10:08.802 "nvme_io": false, 00:10:08.802 "nvme_io_md": false, 00:10:08.802 "write_zeroes": true, 00:10:08.802 "zcopy": true, 00:10:08.802 "get_zone_info": false, 00:10:08.802 "zone_management": false, 00:10:08.802 "zone_append": false, 00:10:08.802 "compare": false, 00:10:08.802 "compare_and_write": false, 00:10:08.802 "abort": true, 00:10:08.802 "seek_hole": false, 00:10:08.802 "seek_data": false, 00:10:08.802 "copy": true, 00:10:08.802 "nvme_iov_md": false 00:10:08.802 }, 00:10:08.802 "memory_domains": [ 00:10:08.802 { 00:10:08.802 "dma_device_id": "system", 00:10:08.802 "dma_device_type": 1 00:10:08.802 }, 00:10:08.802 { 00:10:08.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.802 "dma_device_type": 2 00:10:08.802 } 00:10:08.802 ], 00:10:08.802 "driver_specific": {} 00:10:08.802 } 00:10:08.802 ] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.802 13:26:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.802 [2024-11-18 13:26:38.819703] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.802 [2024-11-18 13:26:38.819834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.802 [2024-11-18 13:26:38.819872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.802 [2024-11-18 13:26:38.821596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.802 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.803 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.062 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.062 "name": "Existed_Raid", 00:10:09.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.062 "strip_size_kb": 0, 00:10:09.062 "state": "configuring", 00:10:09.062 "raid_level": "raid1", 00:10:09.062 "superblock": false, 00:10:09.062 "num_base_bdevs": 3, 00:10:09.062 "num_base_bdevs_discovered": 2, 00:10:09.062 "num_base_bdevs_operational": 3, 00:10:09.062 "base_bdevs_list": [ 00:10:09.062 { 00:10:09.062 "name": "BaseBdev1", 00:10:09.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.062 "is_configured": false, 00:10:09.062 "data_offset": 0, 00:10:09.062 "data_size": 0 00:10:09.062 }, 00:10:09.062 { 00:10:09.062 "name": "BaseBdev2", 00:10:09.062 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:09.062 "is_configured": true, 00:10:09.062 "data_offset": 0, 00:10:09.062 "data_size": 65536 00:10:09.062 }, 00:10:09.062 { 
00:10:09.062 "name": "BaseBdev3", 00:10:09.062 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:09.062 "is_configured": true, 00:10:09.062 "data_offset": 0, 00:10:09.062 "data_size": 65536 00:10:09.062 } 00:10:09.062 ] 00:10:09.062 }' 00:10:09.062 13:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.062 13:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 [2024-11-18 13:26:39.286950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.323 "name": "Existed_Raid", 00:10:09.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.323 "strip_size_kb": 0, 00:10:09.323 "state": "configuring", 00:10:09.323 "raid_level": "raid1", 00:10:09.323 "superblock": false, 00:10:09.323 "num_base_bdevs": 3, 00:10:09.323 "num_base_bdevs_discovered": 1, 00:10:09.323 "num_base_bdevs_operational": 3, 00:10:09.323 "base_bdevs_list": [ 00:10:09.323 { 00:10:09.323 "name": "BaseBdev1", 00:10:09.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.323 "is_configured": false, 00:10:09.323 "data_offset": 0, 00:10:09.323 "data_size": 0 00:10:09.323 }, 00:10:09.323 { 00:10:09.323 "name": null, 00:10:09.323 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:09.323 "is_configured": false, 00:10:09.323 "data_offset": 0, 00:10:09.323 "data_size": 65536 00:10:09.323 }, 00:10:09.323 { 00:10:09.323 "name": "BaseBdev3", 00:10:09.323 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:09.323 "is_configured": true, 00:10:09.323 "data_offset": 0, 00:10:09.323 "data_size": 65536 00:10:09.323 } 00:10:09.323 ] 00:10:09.323 }' 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.323 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.891 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.892 [2024-11-18 13:26:39.802911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.892 BaseBdev1 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.892 
13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.892 [ 00:10:09.892 { 00:10:09.892 "name": "BaseBdev1", 00:10:09.892 "aliases": [ 00:10:09.892 "bdf99e22-1c96-44a2-a412-cd411ac5be9c" 00:10:09.892 ], 00:10:09.892 "product_name": "Malloc disk", 00:10:09.892 "block_size": 512, 00:10:09.892 "num_blocks": 65536, 00:10:09.892 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 00:10:09.892 "assigned_rate_limits": { 00:10:09.892 "rw_ios_per_sec": 0, 00:10:09.892 "rw_mbytes_per_sec": 0, 00:10:09.892 "r_mbytes_per_sec": 0, 00:10:09.892 "w_mbytes_per_sec": 0 00:10:09.892 }, 00:10:09.892 "claimed": true, 00:10:09.892 "claim_type": "exclusive_write", 00:10:09.892 "zoned": false, 00:10:09.892 "supported_io_types": { 00:10:09.892 "read": true, 00:10:09.892 "write": true, 00:10:09.892 "unmap": true, 00:10:09.892 "flush": true, 00:10:09.892 "reset": true, 00:10:09.892 "nvme_admin": false, 00:10:09.892 "nvme_io": false, 00:10:09.892 "nvme_io_md": false, 00:10:09.892 "write_zeroes": true, 00:10:09.892 "zcopy": true, 00:10:09.892 "get_zone_info": false, 00:10:09.892 "zone_management": false, 00:10:09.892 "zone_append": false, 00:10:09.892 "compare": 
false, 00:10:09.892 "compare_and_write": false, 00:10:09.892 "abort": true, 00:10:09.892 "seek_hole": false, 00:10:09.892 "seek_data": false, 00:10:09.892 "copy": true, 00:10:09.892 "nvme_iov_md": false 00:10:09.892 }, 00:10:09.892 "memory_domains": [ 00:10:09.892 { 00:10:09.892 "dma_device_id": "system", 00:10:09.892 "dma_device_type": 1 00:10:09.892 }, 00:10:09.892 { 00:10:09.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.892 "dma_device_type": 2 00:10:09.892 } 00:10:09.892 ], 00:10:09.892 "driver_specific": {} 00:10:09.892 } 00:10:09.892 ] 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.892 "name": "Existed_Raid", 00:10:09.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.892 "strip_size_kb": 0, 00:10:09.892 "state": "configuring", 00:10:09.892 "raid_level": "raid1", 00:10:09.892 "superblock": false, 00:10:09.892 "num_base_bdevs": 3, 00:10:09.892 "num_base_bdevs_discovered": 2, 00:10:09.892 "num_base_bdevs_operational": 3, 00:10:09.892 "base_bdevs_list": [ 00:10:09.892 { 00:10:09.892 "name": "BaseBdev1", 00:10:09.892 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 00:10:09.892 "is_configured": true, 00:10:09.892 "data_offset": 0, 00:10:09.892 "data_size": 65536 00:10:09.892 }, 00:10:09.892 { 00:10:09.892 "name": null, 00:10:09.892 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:09.892 "is_configured": false, 00:10:09.892 "data_offset": 0, 00:10:09.892 "data_size": 65536 00:10:09.892 }, 00:10:09.892 { 00:10:09.892 "name": "BaseBdev3", 00:10:09.892 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:09.892 "is_configured": true, 00:10:09.892 "data_offset": 0, 00:10:09.892 "data_size": 65536 00:10:09.892 } 00:10:09.892 ] 00:10:09.892 }' 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.892 13:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 [2024-11-18 13:26:40.386005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.461 
13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.461 "name": "Existed_Raid", 00:10:10.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.461 "strip_size_kb": 0, 00:10:10.461 "state": "configuring", 00:10:10.461 "raid_level": "raid1", 00:10:10.461 "superblock": false, 00:10:10.461 "num_base_bdevs": 3, 00:10:10.461 "num_base_bdevs_discovered": 1, 00:10:10.461 "num_base_bdevs_operational": 3, 00:10:10.461 "base_bdevs_list": [ 00:10:10.461 { 00:10:10.461 "name": "BaseBdev1", 00:10:10.461 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 00:10:10.461 "is_configured": true, 00:10:10.461 "data_offset": 0, 00:10:10.461 "data_size": 65536 00:10:10.461 }, 00:10:10.461 { 00:10:10.461 "name": null, 00:10:10.461 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:10.461 "is_configured": false, 00:10:10.461 "data_offset": 0, 00:10:10.461 "data_size": 65536 00:10:10.461 }, 00:10:10.461 { 00:10:10.461 "name": null, 00:10:10.461 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:10.461 "is_configured": false, 00:10:10.461 "data_offset": 0, 
00:10:10.461 "data_size": 65536 00:10:10.461 } 00:10:10.461 ] 00:10:10.461 }' 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.461 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.030 [2024-11-18 13:26:40.861286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.030 "name": "Existed_Raid", 00:10:11.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.030 "strip_size_kb": 0, 00:10:11.030 "state": "configuring", 00:10:11.030 "raid_level": "raid1", 00:10:11.030 "superblock": false, 00:10:11.030 "num_base_bdevs": 3, 00:10:11.030 "num_base_bdevs_discovered": 2, 00:10:11.030 "num_base_bdevs_operational": 3, 00:10:11.030 "base_bdevs_list": [ 00:10:11.030 { 00:10:11.030 "name": "BaseBdev1", 00:10:11.030 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 00:10:11.030 "is_configured": true, 00:10:11.030 "data_offset": 0, 00:10:11.030 "data_size": 65536 
00:10:11.030 }, 00:10:11.030 { 00:10:11.030 "name": null, 00:10:11.030 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:11.030 "is_configured": false, 00:10:11.030 "data_offset": 0, 00:10:11.030 "data_size": 65536 00:10:11.030 }, 00:10:11.030 { 00:10:11.030 "name": "BaseBdev3", 00:10:11.030 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:11.030 "is_configured": true, 00:10:11.030 "data_offset": 0, 00:10:11.030 "data_size": 65536 00:10:11.030 } 00:10:11.030 ] 00:10:11.030 }' 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.030 13:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.289 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.289 [2024-11-18 13:26:41.328466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.549 "name": "Existed_Raid", 00:10:11.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.549 "strip_size_kb": 0, 00:10:11.549 "state": "configuring", 00:10:11.549 "raid_level": "raid1", 00:10:11.549 
"superblock": false, 00:10:11.549 "num_base_bdevs": 3, 00:10:11.549 "num_base_bdevs_discovered": 1, 00:10:11.549 "num_base_bdevs_operational": 3, 00:10:11.549 "base_bdevs_list": [ 00:10:11.549 { 00:10:11.549 "name": null, 00:10:11.549 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 00:10:11.549 "is_configured": false, 00:10:11.549 "data_offset": 0, 00:10:11.549 "data_size": 65536 00:10:11.549 }, 00:10:11.549 { 00:10:11.549 "name": null, 00:10:11.549 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:11.549 "is_configured": false, 00:10:11.549 "data_offset": 0, 00:10:11.549 "data_size": 65536 00:10:11.549 }, 00:10:11.549 { 00:10:11.549 "name": "BaseBdev3", 00:10:11.549 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:11.549 "is_configured": true, 00:10:11.549 "data_offset": 0, 00:10:11.549 "data_size": 65536 00:10:11.549 } 00:10:11.549 ] 00:10:11.549 }' 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.549 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.118 [2024-11-18 13:26:41.955033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.118 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.119 13:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.119 13:26:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.119 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.119 "name": "Existed_Raid", 00:10:12.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.119 "strip_size_kb": 0, 00:10:12.119 "state": "configuring", 00:10:12.119 "raid_level": "raid1", 00:10:12.119 "superblock": false, 00:10:12.119 "num_base_bdevs": 3, 00:10:12.119 "num_base_bdevs_discovered": 2, 00:10:12.119 "num_base_bdevs_operational": 3, 00:10:12.119 "base_bdevs_list": [ 00:10:12.119 { 00:10:12.119 "name": null, 00:10:12.119 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 00:10:12.119 "is_configured": false, 00:10:12.119 "data_offset": 0, 00:10:12.119 "data_size": 65536 00:10:12.119 }, 00:10:12.119 { 00:10:12.119 "name": "BaseBdev2", 00:10:12.119 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:12.119 "is_configured": true, 00:10:12.119 "data_offset": 0, 00:10:12.119 "data_size": 65536 00:10:12.119 }, 00:10:12.119 { 00:10:12.119 "name": "BaseBdev3", 00:10:12.119 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:12.119 "is_configured": true, 00:10:12.119 "data_offset": 0, 00:10:12.119 "data_size": 65536 00:10:12.119 } 00:10:12.119 ] 00:10:12.119 }' 00:10:12.119 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.119 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.378 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.378 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.378 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.378 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.378 13:26:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bdf99e22-1c96-44a2-a412-cd411ac5be9c 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.638 [2024-11-18 13:26:42.531666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:12.638 [2024-11-18 13:26:42.531807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:12.638 [2024-11-18 13:26:42.531820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:12.638 [2024-11-18 13:26:42.532080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:12.638 [2024-11-18 13:26:42.532279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:12.638 [2024-11-18 13:26:42.532294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:12.638 [2024-11-18 13:26:42.532539] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.638 NewBaseBdev 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.638 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.638 [ 00:10:12.638 { 00:10:12.638 "name": "NewBaseBdev", 00:10:12.638 "aliases": [ 00:10:12.638 "bdf99e22-1c96-44a2-a412-cd411ac5be9c" 00:10:12.638 ], 00:10:12.638 "product_name": "Malloc disk", 00:10:12.638 "block_size": 512, 00:10:12.638 "num_blocks": 65536, 00:10:12.638 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 
00:10:12.638 "assigned_rate_limits": { 00:10:12.638 "rw_ios_per_sec": 0, 00:10:12.638 "rw_mbytes_per_sec": 0, 00:10:12.638 "r_mbytes_per_sec": 0, 00:10:12.638 "w_mbytes_per_sec": 0 00:10:12.638 }, 00:10:12.638 "claimed": true, 00:10:12.638 "claim_type": "exclusive_write", 00:10:12.639 "zoned": false, 00:10:12.639 "supported_io_types": { 00:10:12.639 "read": true, 00:10:12.639 "write": true, 00:10:12.639 "unmap": true, 00:10:12.639 "flush": true, 00:10:12.639 "reset": true, 00:10:12.639 "nvme_admin": false, 00:10:12.639 "nvme_io": false, 00:10:12.639 "nvme_io_md": false, 00:10:12.639 "write_zeroes": true, 00:10:12.639 "zcopy": true, 00:10:12.639 "get_zone_info": false, 00:10:12.639 "zone_management": false, 00:10:12.639 "zone_append": false, 00:10:12.639 "compare": false, 00:10:12.639 "compare_and_write": false, 00:10:12.639 "abort": true, 00:10:12.639 "seek_hole": false, 00:10:12.639 "seek_data": false, 00:10:12.639 "copy": true, 00:10:12.639 "nvme_iov_md": false 00:10:12.639 }, 00:10:12.639 "memory_domains": [ 00:10:12.639 { 00:10:12.639 "dma_device_id": "system", 00:10:12.639 "dma_device_type": 1 00:10:12.639 }, 00:10:12.639 { 00:10:12.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.639 "dma_device_type": 2 00:10:12.639 } 00:10:12.639 ], 00:10:12.639 "driver_specific": {} 00:10:12.639 } 00:10:12.639 ] 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.639 "name": "Existed_Raid", 00:10:12.639 "uuid": "b43fd8ca-0a20-41ff-a1dc-fe068e9ba54f", 00:10:12.639 "strip_size_kb": 0, 00:10:12.639 "state": "online", 00:10:12.639 "raid_level": "raid1", 00:10:12.639 "superblock": false, 00:10:12.639 "num_base_bdevs": 3, 00:10:12.639 "num_base_bdevs_discovered": 3, 00:10:12.639 "num_base_bdevs_operational": 3, 00:10:12.639 "base_bdevs_list": [ 00:10:12.639 { 00:10:12.639 "name": "NewBaseBdev", 00:10:12.639 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 00:10:12.639 "is_configured": true, 00:10:12.639 "data_offset": 0, 00:10:12.639 "data_size": 65536 
00:10:12.639 }, 00:10:12.639 { 00:10:12.639 "name": "BaseBdev2", 00:10:12.639 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:12.639 "is_configured": true, 00:10:12.639 "data_offset": 0, 00:10:12.639 "data_size": 65536 00:10:12.639 }, 00:10:12.639 { 00:10:12.639 "name": "BaseBdev3", 00:10:12.639 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:12.639 "is_configured": true, 00:10:12.639 "data_offset": 0, 00:10:12.639 "data_size": 65536 00:10:12.639 } 00:10:12.639 ] 00:10:12.639 }' 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.639 13:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.208 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.208 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.209 [2024-11-18 13:26:43.047180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.209 "name": "Existed_Raid", 00:10:13.209 "aliases": [ 00:10:13.209 "b43fd8ca-0a20-41ff-a1dc-fe068e9ba54f" 00:10:13.209 ], 00:10:13.209 "product_name": "Raid Volume", 00:10:13.209 "block_size": 512, 00:10:13.209 "num_blocks": 65536, 00:10:13.209 "uuid": "b43fd8ca-0a20-41ff-a1dc-fe068e9ba54f", 00:10:13.209 "assigned_rate_limits": { 00:10:13.209 "rw_ios_per_sec": 0, 00:10:13.209 "rw_mbytes_per_sec": 0, 00:10:13.209 "r_mbytes_per_sec": 0, 00:10:13.209 "w_mbytes_per_sec": 0 00:10:13.209 }, 00:10:13.209 "claimed": false, 00:10:13.209 "zoned": false, 00:10:13.209 "supported_io_types": { 00:10:13.209 "read": true, 00:10:13.209 "write": true, 00:10:13.209 "unmap": false, 00:10:13.209 "flush": false, 00:10:13.209 "reset": true, 00:10:13.209 "nvme_admin": false, 00:10:13.209 "nvme_io": false, 00:10:13.209 "nvme_io_md": false, 00:10:13.209 "write_zeroes": true, 00:10:13.209 "zcopy": false, 00:10:13.209 "get_zone_info": false, 00:10:13.209 "zone_management": false, 00:10:13.209 "zone_append": false, 00:10:13.209 "compare": false, 00:10:13.209 "compare_and_write": false, 00:10:13.209 "abort": false, 00:10:13.209 "seek_hole": false, 00:10:13.209 "seek_data": false, 00:10:13.209 "copy": false, 00:10:13.209 "nvme_iov_md": false 00:10:13.209 }, 00:10:13.209 "memory_domains": [ 00:10:13.209 { 00:10:13.209 "dma_device_id": "system", 00:10:13.209 "dma_device_type": 1 00:10:13.209 }, 00:10:13.209 { 00:10:13.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.209 "dma_device_type": 2 00:10:13.209 }, 00:10:13.209 { 00:10:13.209 "dma_device_id": "system", 00:10:13.209 "dma_device_type": 1 00:10:13.209 }, 00:10:13.209 { 00:10:13.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.209 "dma_device_type": 2 00:10:13.209 }, 00:10:13.209 { 00:10:13.209 "dma_device_id": "system", 00:10:13.209 "dma_device_type": 1 00:10:13.209 }, 
00:10:13.209 { 00:10:13.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.209 "dma_device_type": 2 00:10:13.209 } 00:10:13.209 ], 00:10:13.209 "driver_specific": { 00:10:13.209 "raid": { 00:10:13.209 "uuid": "b43fd8ca-0a20-41ff-a1dc-fe068e9ba54f", 00:10:13.209 "strip_size_kb": 0, 00:10:13.209 "state": "online", 00:10:13.209 "raid_level": "raid1", 00:10:13.209 "superblock": false, 00:10:13.209 "num_base_bdevs": 3, 00:10:13.209 "num_base_bdevs_discovered": 3, 00:10:13.209 "num_base_bdevs_operational": 3, 00:10:13.209 "base_bdevs_list": [ 00:10:13.209 { 00:10:13.209 "name": "NewBaseBdev", 00:10:13.209 "uuid": "bdf99e22-1c96-44a2-a412-cd411ac5be9c", 00:10:13.209 "is_configured": true, 00:10:13.209 "data_offset": 0, 00:10:13.209 "data_size": 65536 00:10:13.209 }, 00:10:13.209 { 00:10:13.209 "name": "BaseBdev2", 00:10:13.209 "uuid": "989b41f7-a599-4e97-b318-0466c9e9760c", 00:10:13.209 "is_configured": true, 00:10:13.209 "data_offset": 0, 00:10:13.209 "data_size": 65536 00:10:13.209 }, 00:10:13.209 { 00:10:13.209 "name": "BaseBdev3", 00:10:13.209 "uuid": "b1c773c1-a5d0-4641-ad1c-47a81f5da823", 00:10:13.209 "is_configured": true, 00:10:13.209 "data_offset": 0, 00:10:13.209 "data_size": 65536 00:10:13.209 } 00:10:13.209 ] 00:10:13.209 } 00:10:13.209 } 00:10:13.209 }' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:13.209 BaseBdev2 00:10:13.209 BaseBdev3' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.209 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.469 [2024-11-18 13:26:43.310494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.469 [2024-11-18 13:26:43.310543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.469 [2024-11-18 13:26:43.310618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.469 [2024-11-18 13:26:43.310898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.469 [2024-11-18 13:26:43.310909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67413 00:10:13.469 13:26:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67413 ']' 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67413 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67413 00:10:13.469 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.470 killing process with pid 67413 00:10:13.470 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.470 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67413' 00:10:13.470 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67413 00:10:13.470 [2024-11-18 13:26:43.361599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.470 13:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67413 00:10:13.729 [2024-11-18 13:26:43.657013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.125 13:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:15.125 00:10:15.125 real 0m10.696s 00:10:15.125 user 0m16.994s 00:10:15.125 sys 0m1.971s 00:10:15.125 13:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.125 13:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.125 ************************************ 00:10:15.125 END TEST raid_state_function_test 00:10:15.126 ************************************ 00:10:15.126 13:26:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:15.126 13:26:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:15.126 13:26:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.126 13:26:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.126 ************************************ 00:10:15.126 START TEST raid_state_function_test_sb 00:10:15.126 ************************************ 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:15.126 Process raid pid: 68034 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68034 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68034' 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68034 00:10:15.126 13:26:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68034 ']' 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.126 13:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.126 [2024-11-18 13:26:44.915738] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:15.126 [2024-11-18 13:26:44.915955] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.126 [2024-11-18 13:26:45.090007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.385 [2024-11-18 13:26:45.205667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.385 [2024-11-18 13:26:45.407561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.385 [2024-11-18 13:26:45.407681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.954 [2024-11-18 13:26:45.748118] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.954 [2024-11-18 13:26:45.748245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.954 [2024-11-18 13:26:45.748276] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.954 [2024-11-18 13:26:45.748301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.954 [2024-11-18 13:26:45.748319] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.954 [2024-11-18 13:26:45.748340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.954 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.955 
13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.955 "name": "Existed_Raid", 00:10:15.955 "uuid": "3e14e231-a92b-47a8-92e3-4bc4909a6743", 00:10:15.955 "strip_size_kb": 0, 00:10:15.955 "state": "configuring", 00:10:15.955 "raid_level": "raid1", 00:10:15.955 "superblock": true, 00:10:15.955 "num_base_bdevs": 3, 00:10:15.955 "num_base_bdevs_discovered": 0, 00:10:15.955 "num_base_bdevs_operational": 3, 00:10:15.955 "base_bdevs_list": [ 00:10:15.955 { 00:10:15.955 "name": "BaseBdev1", 00:10:15.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.955 "is_configured": false, 00:10:15.955 "data_offset": 0, 00:10:15.955 "data_size": 0 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "name": "BaseBdev2", 00:10:15.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.955 "is_configured": false, 00:10:15.955 "data_offset": 0, 00:10:15.955 "data_size": 0 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 
"name": "BaseBdev3", 00:10:15.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.955 "is_configured": false, 00:10:15.955 "data_offset": 0, 00:10:15.955 "data_size": 0 00:10:15.955 } 00:10:15.955 ] 00:10:15.955 }' 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.955 13:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.214 [2024-11-18 13:26:46.227273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.214 [2024-11-18 13:26:46.227320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.214 [2024-11-18 13:26:46.239241] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.214 [2024-11-18 13:26:46.239287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.214 [2024-11-18 13:26:46.239296] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.214 [2024-11-18 
13:26:46.239306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.214 [2024-11-18 13:26:46.239312] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:16.214 [2024-11-18 13:26:46.239321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.214 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.473 [2024-11-18 13:26:46.287403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.473 BaseBdev1 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.473 [ 00:10:16.473 { 00:10:16.473 "name": "BaseBdev1", 00:10:16.473 "aliases": [ 00:10:16.473 "02ee6fea-8b77-4933-8c28-e8a21a86ffa0" 00:10:16.473 ], 00:10:16.473 "product_name": "Malloc disk", 00:10:16.473 "block_size": 512, 00:10:16.473 "num_blocks": 65536, 00:10:16.473 "uuid": "02ee6fea-8b77-4933-8c28-e8a21a86ffa0", 00:10:16.473 "assigned_rate_limits": { 00:10:16.473 "rw_ios_per_sec": 0, 00:10:16.473 "rw_mbytes_per_sec": 0, 00:10:16.473 "r_mbytes_per_sec": 0, 00:10:16.473 "w_mbytes_per_sec": 0 00:10:16.473 }, 00:10:16.473 "claimed": true, 00:10:16.473 "claim_type": "exclusive_write", 00:10:16.473 "zoned": false, 00:10:16.473 "supported_io_types": { 00:10:16.473 "read": true, 00:10:16.473 "write": true, 00:10:16.473 "unmap": true, 00:10:16.473 "flush": true, 00:10:16.473 "reset": true, 00:10:16.473 "nvme_admin": false, 00:10:16.473 "nvme_io": false, 00:10:16.473 "nvme_io_md": false, 00:10:16.473 "write_zeroes": true, 00:10:16.473 "zcopy": true, 00:10:16.473 "get_zone_info": false, 00:10:16.473 "zone_management": false, 00:10:16.473 "zone_append": false, 00:10:16.473 "compare": false, 00:10:16.473 "compare_and_write": false, 00:10:16.473 "abort": true, 00:10:16.473 "seek_hole": false, 00:10:16.473 "seek_data": false, 00:10:16.473 "copy": true, 00:10:16.473 "nvme_iov_md": false 00:10:16.473 }, 00:10:16.473 "memory_domains": [ 00:10:16.473 { 00:10:16.473 "dma_device_id": 
"system", 00:10:16.473 "dma_device_type": 1 00:10:16.473 }, 00:10:16.473 { 00:10:16.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.473 "dma_device_type": 2 00:10:16.473 } 00:10:16.473 ], 00:10:16.473 "driver_specific": {} 00:10:16.473 } 00:10:16.473 ] 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.473 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.474 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.474 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.474 13:26:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.474 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.474 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.474 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.474 "name": "Existed_Raid", 00:10:16.474 "uuid": "40819b90-27d8-4307-8259-3347fd55a200", 00:10:16.474 "strip_size_kb": 0, 00:10:16.474 "state": "configuring", 00:10:16.474 "raid_level": "raid1", 00:10:16.474 "superblock": true, 00:10:16.474 "num_base_bdevs": 3, 00:10:16.474 "num_base_bdevs_discovered": 1, 00:10:16.474 "num_base_bdevs_operational": 3, 00:10:16.474 "base_bdevs_list": [ 00:10:16.474 { 00:10:16.474 "name": "BaseBdev1", 00:10:16.474 "uuid": "02ee6fea-8b77-4933-8c28-e8a21a86ffa0", 00:10:16.474 "is_configured": true, 00:10:16.474 "data_offset": 2048, 00:10:16.474 "data_size": 63488 00:10:16.474 }, 00:10:16.474 { 00:10:16.474 "name": "BaseBdev2", 00:10:16.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.474 "is_configured": false, 00:10:16.474 "data_offset": 0, 00:10:16.474 "data_size": 0 00:10:16.474 }, 00:10:16.474 { 00:10:16.474 "name": "BaseBdev3", 00:10:16.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.474 "is_configured": false, 00:10:16.474 "data_offset": 0, 00:10:16.474 "data_size": 0 00:10:16.474 } 00:10:16.474 ] 00:10:16.474 }' 00:10:16.474 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.474 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.042 [2024-11-18 13:26:46.822544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.042 [2024-11-18 13:26:46.822608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.042 [2024-11-18 13:26:46.834565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.042 [2024-11-18 13:26:46.836462] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:17.042 [2024-11-18 13:26:46.836550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.042 [2024-11-18 13:26:46.836581] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:17.042 [2024-11-18 13:26:46.836606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.042 13:26:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.042 "name": "Existed_Raid", 00:10:17.042 "uuid": "195f11e2-6abb-4440-8cbf-3be93648592a", 00:10:17.042 "strip_size_kb": 0, 00:10:17.042 "state": "configuring", 00:10:17.042 "raid_level": "raid1", 00:10:17.042 "superblock": true, 00:10:17.042 "num_base_bdevs": 3, 00:10:17.042 
"num_base_bdevs_discovered": 1, 00:10:17.042 "num_base_bdevs_operational": 3, 00:10:17.042 "base_bdevs_list": [ 00:10:17.042 { 00:10:17.042 "name": "BaseBdev1", 00:10:17.042 "uuid": "02ee6fea-8b77-4933-8c28-e8a21a86ffa0", 00:10:17.042 "is_configured": true, 00:10:17.042 "data_offset": 2048, 00:10:17.042 "data_size": 63488 00:10:17.042 }, 00:10:17.042 { 00:10:17.042 "name": "BaseBdev2", 00:10:17.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.042 "is_configured": false, 00:10:17.042 "data_offset": 0, 00:10:17.042 "data_size": 0 00:10:17.042 }, 00:10:17.042 { 00:10:17.042 "name": "BaseBdev3", 00:10:17.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.042 "is_configured": false, 00:10:17.042 "data_offset": 0, 00:10:17.042 "data_size": 0 00:10:17.042 } 00:10:17.042 ] 00:10:17.042 }' 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.042 13:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.302 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.302 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.302 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.561 [2024-11-18 13:26:47.360060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.561 BaseBdev2 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.561 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.561 [ 00:10:17.561 { 00:10:17.561 "name": "BaseBdev2", 00:10:17.561 "aliases": [ 00:10:17.561 "0f822bb4-45a7-4569-adc7-4f4dd7f8b1e7" 00:10:17.561 ], 00:10:17.561 "product_name": "Malloc disk", 00:10:17.561 "block_size": 512, 00:10:17.561 "num_blocks": 65536, 00:10:17.561 "uuid": "0f822bb4-45a7-4569-adc7-4f4dd7f8b1e7", 00:10:17.561 "assigned_rate_limits": { 00:10:17.561 "rw_ios_per_sec": 0, 00:10:17.561 "rw_mbytes_per_sec": 0, 00:10:17.561 "r_mbytes_per_sec": 0, 00:10:17.561 "w_mbytes_per_sec": 0 00:10:17.561 }, 00:10:17.561 "claimed": true, 00:10:17.561 "claim_type": "exclusive_write", 00:10:17.561 "zoned": false, 00:10:17.561 "supported_io_types": { 00:10:17.561 "read": true, 00:10:17.561 "write": true, 00:10:17.561 "unmap": true, 00:10:17.561 "flush": true, 00:10:17.561 "reset": true, 00:10:17.561 "nvme_admin": false, 
00:10:17.561 "nvme_io": false, 00:10:17.561 "nvme_io_md": false, 00:10:17.561 "write_zeroes": true, 00:10:17.561 "zcopy": true, 00:10:17.561 "get_zone_info": false, 00:10:17.561 "zone_management": false, 00:10:17.561 "zone_append": false, 00:10:17.561 "compare": false, 00:10:17.561 "compare_and_write": false, 00:10:17.561 "abort": true, 00:10:17.561 "seek_hole": false, 00:10:17.561 "seek_data": false, 00:10:17.561 "copy": true, 00:10:17.561 "nvme_iov_md": false 00:10:17.561 }, 00:10:17.561 "memory_domains": [ 00:10:17.561 { 00:10:17.561 "dma_device_id": "system", 00:10:17.561 "dma_device_type": 1 00:10:17.561 }, 00:10:17.561 { 00:10:17.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.561 "dma_device_type": 2 00:10:17.561 } 00:10:17.561 ], 00:10:17.562 "driver_specific": {} 00:10:17.562 } 00:10:17.562 ] 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.562 "name": "Existed_Raid", 00:10:17.562 "uuid": "195f11e2-6abb-4440-8cbf-3be93648592a", 00:10:17.562 "strip_size_kb": 0, 00:10:17.562 "state": "configuring", 00:10:17.562 "raid_level": "raid1", 00:10:17.562 "superblock": true, 00:10:17.562 "num_base_bdevs": 3, 00:10:17.562 "num_base_bdevs_discovered": 2, 00:10:17.562 "num_base_bdevs_operational": 3, 00:10:17.562 "base_bdevs_list": [ 00:10:17.562 { 00:10:17.562 "name": "BaseBdev1", 00:10:17.562 "uuid": "02ee6fea-8b77-4933-8c28-e8a21a86ffa0", 00:10:17.562 "is_configured": true, 00:10:17.562 "data_offset": 2048, 00:10:17.562 "data_size": 63488 00:10:17.562 }, 00:10:17.562 { 00:10:17.562 "name": "BaseBdev2", 00:10:17.562 "uuid": "0f822bb4-45a7-4569-adc7-4f4dd7f8b1e7", 00:10:17.562 "is_configured": true, 00:10:17.562 "data_offset": 2048, 00:10:17.562 "data_size": 63488 00:10:17.562 }, 
00:10:17.562 { 00:10:17.562 "name": "BaseBdev3", 00:10:17.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.562 "is_configured": false, 00:10:17.562 "data_offset": 0, 00:10:17.562 "data_size": 0 00:10:17.562 } 00:10:17.562 ] 00:10:17.562 }' 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.562 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.131 [2024-11-18 13:26:47.921380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.131 BaseBdev3 00:10:18.131 [2024-11-18 13:26:47.921719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:18.131 [2024-11-18 13:26:47.921743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.131 [2024-11-18 13:26:47.922177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:18.131 [2024-11-18 13:26:47.922338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:18.131 [2024-11-18 13:26:47.922348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:18.131 [2024-11-18 13:26:47.922524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:18.131 13:26:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.131 [ 00:10:18.131 { 00:10:18.131 "name": "BaseBdev3", 00:10:18.131 "aliases": [ 00:10:18.131 "a81a06f2-5e57-4d04-803d-7e99063d1307" 00:10:18.131 ], 00:10:18.131 "product_name": "Malloc disk", 00:10:18.131 "block_size": 512, 00:10:18.131 "num_blocks": 65536, 00:10:18.131 "uuid": "a81a06f2-5e57-4d04-803d-7e99063d1307", 00:10:18.131 "assigned_rate_limits": { 00:10:18.131 "rw_ios_per_sec": 0, 00:10:18.131 "rw_mbytes_per_sec": 0, 00:10:18.131 "r_mbytes_per_sec": 0, 00:10:18.131 "w_mbytes_per_sec": 0 00:10:18.131 }, 00:10:18.131 "claimed": true, 00:10:18.131 "claim_type": "exclusive_write", 00:10:18.131 "zoned": false, 
00:10:18.131 "supported_io_types": { 00:10:18.131 "read": true, 00:10:18.131 "write": true, 00:10:18.131 "unmap": true, 00:10:18.131 "flush": true, 00:10:18.131 "reset": true, 00:10:18.131 "nvme_admin": false, 00:10:18.131 "nvme_io": false, 00:10:18.131 "nvme_io_md": false, 00:10:18.131 "write_zeroes": true, 00:10:18.131 "zcopy": true, 00:10:18.131 "get_zone_info": false, 00:10:18.131 "zone_management": false, 00:10:18.131 "zone_append": false, 00:10:18.131 "compare": false, 00:10:18.131 "compare_and_write": false, 00:10:18.131 "abort": true, 00:10:18.131 "seek_hole": false, 00:10:18.131 "seek_data": false, 00:10:18.131 "copy": true, 00:10:18.131 "nvme_iov_md": false 00:10:18.131 }, 00:10:18.131 "memory_domains": [ 00:10:18.131 { 00:10:18.131 "dma_device_id": "system", 00:10:18.131 "dma_device_type": 1 00:10:18.131 }, 00:10:18.131 { 00:10:18.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.131 "dma_device_type": 2 00:10:18.131 } 00:10:18.131 ], 00:10:18.131 "driver_specific": {} 00:10:18.131 } 00:10:18.131 ] 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.131 13:26:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.131 13:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.131 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.131 "name": "Existed_Raid", 00:10:18.131 "uuid": "195f11e2-6abb-4440-8cbf-3be93648592a", 00:10:18.131 "strip_size_kb": 0, 00:10:18.131 "state": "online", 00:10:18.131 "raid_level": "raid1", 00:10:18.131 "superblock": true, 00:10:18.131 "num_base_bdevs": 3, 00:10:18.131 "num_base_bdevs_discovered": 3, 00:10:18.131 "num_base_bdevs_operational": 3, 00:10:18.131 "base_bdevs_list": [ 00:10:18.131 { 00:10:18.131 "name": "BaseBdev1", 00:10:18.131 "uuid": "02ee6fea-8b77-4933-8c28-e8a21a86ffa0", 00:10:18.131 "is_configured": true, 00:10:18.131 "data_offset": 2048, 00:10:18.131 "data_size": 63488 00:10:18.131 }, 00:10:18.131 { 00:10:18.131 
"name": "BaseBdev2", 00:10:18.131 "uuid": "0f822bb4-45a7-4569-adc7-4f4dd7f8b1e7", 00:10:18.131 "is_configured": true, 00:10:18.131 "data_offset": 2048, 00:10:18.131 "data_size": 63488 00:10:18.131 }, 00:10:18.131 { 00:10:18.131 "name": "BaseBdev3", 00:10:18.131 "uuid": "a81a06f2-5e57-4d04-803d-7e99063d1307", 00:10:18.131 "is_configured": true, 00:10:18.131 "data_offset": 2048, 00:10:18.131 "data_size": 63488 00:10:18.131 } 00:10:18.131 ] 00:10:18.131 }' 00:10:18.131 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.131 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.391 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:18.391 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:18.391 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.391 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.391 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.391 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.391 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:18.392 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.392 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.392 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.392 [2024-11-18 13:26:48.381018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.392 13:26:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.392 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.392 "name": "Existed_Raid", 00:10:18.392 "aliases": [ 00:10:18.392 "195f11e2-6abb-4440-8cbf-3be93648592a" 00:10:18.392 ], 00:10:18.392 "product_name": "Raid Volume", 00:10:18.392 "block_size": 512, 00:10:18.392 "num_blocks": 63488, 00:10:18.392 "uuid": "195f11e2-6abb-4440-8cbf-3be93648592a", 00:10:18.392 "assigned_rate_limits": { 00:10:18.392 "rw_ios_per_sec": 0, 00:10:18.392 "rw_mbytes_per_sec": 0, 00:10:18.392 "r_mbytes_per_sec": 0, 00:10:18.392 "w_mbytes_per_sec": 0 00:10:18.392 }, 00:10:18.392 "claimed": false, 00:10:18.392 "zoned": false, 00:10:18.392 "supported_io_types": { 00:10:18.392 "read": true, 00:10:18.392 "write": true, 00:10:18.392 "unmap": false, 00:10:18.392 "flush": false, 00:10:18.392 "reset": true, 00:10:18.392 "nvme_admin": false, 00:10:18.392 "nvme_io": false, 00:10:18.392 "nvme_io_md": false, 00:10:18.392 "write_zeroes": true, 00:10:18.392 "zcopy": false, 00:10:18.392 "get_zone_info": false, 00:10:18.392 "zone_management": false, 00:10:18.392 "zone_append": false, 00:10:18.392 "compare": false, 00:10:18.392 "compare_and_write": false, 00:10:18.392 "abort": false, 00:10:18.392 "seek_hole": false, 00:10:18.392 "seek_data": false, 00:10:18.392 "copy": false, 00:10:18.392 "nvme_iov_md": false 00:10:18.392 }, 00:10:18.392 "memory_domains": [ 00:10:18.392 { 00:10:18.392 "dma_device_id": "system", 00:10:18.392 "dma_device_type": 1 00:10:18.392 }, 00:10:18.392 { 00:10:18.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.392 "dma_device_type": 2 00:10:18.392 }, 00:10:18.392 { 00:10:18.392 "dma_device_id": "system", 00:10:18.392 "dma_device_type": 1 00:10:18.392 }, 00:10:18.392 { 00:10:18.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.392 "dma_device_type": 2 00:10:18.392 }, 00:10:18.392 { 00:10:18.392 "dma_device_id": "system", 00:10:18.392 "dma_device_type": 1 00:10:18.392 }, 
00:10:18.392 { 00:10:18.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.392 "dma_device_type": 2 00:10:18.392 } 00:10:18.392 ], 00:10:18.392 "driver_specific": { 00:10:18.392 "raid": { 00:10:18.392 "uuid": "195f11e2-6abb-4440-8cbf-3be93648592a", 00:10:18.392 "strip_size_kb": 0, 00:10:18.392 "state": "online", 00:10:18.392 "raid_level": "raid1", 00:10:18.392 "superblock": true, 00:10:18.392 "num_base_bdevs": 3, 00:10:18.392 "num_base_bdevs_discovered": 3, 00:10:18.392 "num_base_bdevs_operational": 3, 00:10:18.392 "base_bdevs_list": [ 00:10:18.392 { 00:10:18.392 "name": "BaseBdev1", 00:10:18.392 "uuid": "02ee6fea-8b77-4933-8c28-e8a21a86ffa0", 00:10:18.392 "is_configured": true, 00:10:18.392 "data_offset": 2048, 00:10:18.392 "data_size": 63488 00:10:18.392 }, 00:10:18.392 { 00:10:18.392 "name": "BaseBdev2", 00:10:18.392 "uuid": "0f822bb4-45a7-4569-adc7-4f4dd7f8b1e7", 00:10:18.392 "is_configured": true, 00:10:18.392 "data_offset": 2048, 00:10:18.392 "data_size": 63488 00:10:18.392 }, 00:10:18.392 { 00:10:18.392 "name": "BaseBdev3", 00:10:18.392 "uuid": "a81a06f2-5e57-4d04-803d-7e99063d1307", 00:10:18.392 "is_configured": true, 00:10:18.392 "data_offset": 2048, 00:10:18.392 "data_size": 63488 00:10:18.392 } 00:10:18.392 ] 00:10:18.392 } 00:10:18.392 } 00:10:18.392 }' 00:10:18.392 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:18.651 BaseBdev2 00:10:18.651 BaseBdev3' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.651 13:26:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.651 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.651 [2024-11-18 13:26:48.652278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.910 "name": "Existed_Raid", 00:10:18.910 "uuid": "195f11e2-6abb-4440-8cbf-3be93648592a", 00:10:18.910 "strip_size_kb": 0, 00:10:18.910 "state": "online", 00:10:18.910 "raid_level": 
"raid1", 00:10:18.910 "superblock": true, 00:10:18.910 "num_base_bdevs": 3, 00:10:18.910 "num_base_bdevs_discovered": 2, 00:10:18.910 "num_base_bdevs_operational": 2, 00:10:18.910 "base_bdevs_list": [ 00:10:18.910 { 00:10:18.910 "name": null, 00:10:18.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.910 "is_configured": false, 00:10:18.910 "data_offset": 0, 00:10:18.910 "data_size": 63488 00:10:18.910 }, 00:10:18.910 { 00:10:18.910 "name": "BaseBdev2", 00:10:18.910 "uuid": "0f822bb4-45a7-4569-adc7-4f4dd7f8b1e7", 00:10:18.910 "is_configured": true, 00:10:18.910 "data_offset": 2048, 00:10:18.910 "data_size": 63488 00:10:18.910 }, 00:10:18.910 { 00:10:18.910 "name": "BaseBdev3", 00:10:18.910 "uuid": "a81a06f2-5e57-4d04-803d-7e99063d1307", 00:10:18.910 "is_configured": true, 00:10:18.910 "data_offset": 2048, 00:10:18.910 "data_size": 63488 00:10:18.910 } 00:10:18.910 ] 00:10:18.910 }' 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.910 13:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.169 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.169 [2024-11-18 13:26:49.184739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.428 13:26:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.428 [2024-11-18 13:26:49.338509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.428 [2024-11-18 13:26:49.338733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.428 [2024-11-18 13:26:49.437575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.428 [2024-11-18 13:26:49.437715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.428 [2024-11-18 13:26:49.437757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.428 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:19.688 13:26:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 BaseBdev2 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.688 13:26:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 [ 00:10:19.688 { 00:10:19.688 "name": "BaseBdev2", 00:10:19.688 "aliases": [ 00:10:19.688 "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3" 00:10:19.688 ], 00:10:19.688 "product_name": "Malloc disk", 00:10:19.688 "block_size": 512, 00:10:19.688 "num_blocks": 65536, 00:10:19.688 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:19.688 "assigned_rate_limits": { 00:10:19.688 "rw_ios_per_sec": 0, 00:10:19.688 "rw_mbytes_per_sec": 0, 00:10:19.688 "r_mbytes_per_sec": 0, 00:10:19.688 "w_mbytes_per_sec": 0 00:10:19.688 }, 00:10:19.688 "claimed": false, 00:10:19.688 "zoned": false, 00:10:19.688 "supported_io_types": { 00:10:19.688 "read": true, 00:10:19.688 "write": true, 00:10:19.688 "unmap": true, 00:10:19.688 "flush": true, 00:10:19.688 "reset": true, 00:10:19.688 "nvme_admin": false, 00:10:19.688 "nvme_io": false, 00:10:19.688 "nvme_io_md": false, 00:10:19.688 "write_zeroes": true, 00:10:19.688 "zcopy": true, 00:10:19.688 "get_zone_info": false, 00:10:19.688 "zone_management": false, 00:10:19.688 "zone_append": false, 00:10:19.688 "compare": false, 00:10:19.688 "compare_and_write": false, 00:10:19.688 "abort": true, 00:10:19.688 "seek_hole": false, 00:10:19.688 "seek_data": false, 00:10:19.688 "copy": true, 00:10:19.688 "nvme_iov_md": false 00:10:19.688 }, 00:10:19.688 "memory_domains": [ 00:10:19.688 { 00:10:19.688 "dma_device_id": "system", 00:10:19.688 "dma_device_type": 1 00:10:19.688 }, 00:10:19.688 { 00:10:19.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.688 "dma_device_type": 2 00:10:19.688 } 00:10:19.688 ], 00:10:19.688 "driver_specific": {} 00:10:19.688 } 00:10:19.688 ] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 BaseBdev3 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.688 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.688 [ 00:10:19.688 { 00:10:19.688 "name": "BaseBdev3", 00:10:19.688 "aliases": [ 00:10:19.688 "76733bf8-d3d3-4464-ac17-884dafa13583" 00:10:19.688 ], 00:10:19.688 "product_name": "Malloc disk", 00:10:19.688 "block_size": 512, 00:10:19.688 "num_blocks": 65536, 00:10:19.688 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:19.688 "assigned_rate_limits": { 00:10:19.688 "rw_ios_per_sec": 0, 00:10:19.688 "rw_mbytes_per_sec": 0, 00:10:19.688 "r_mbytes_per_sec": 0, 00:10:19.688 "w_mbytes_per_sec": 0 00:10:19.688 }, 00:10:19.688 "claimed": false, 00:10:19.688 "zoned": false, 00:10:19.689 "supported_io_types": { 00:10:19.689 "read": true, 00:10:19.689 "write": true, 00:10:19.689 "unmap": true, 00:10:19.689 "flush": true, 00:10:19.689 "reset": true, 00:10:19.689 "nvme_admin": false, 00:10:19.689 "nvme_io": false, 00:10:19.689 "nvme_io_md": false, 00:10:19.689 "write_zeroes": true, 00:10:19.689 "zcopy": true, 00:10:19.689 "get_zone_info": false, 00:10:19.689 "zone_management": false, 00:10:19.689 "zone_append": false, 00:10:19.689 "compare": false, 00:10:19.689 "compare_and_write": false, 00:10:19.689 "abort": true, 00:10:19.689 "seek_hole": false, 00:10:19.689 "seek_data": false, 00:10:19.689 "copy": true, 00:10:19.689 "nvme_iov_md": false 00:10:19.689 }, 00:10:19.689 "memory_domains": [ 00:10:19.689 { 00:10:19.689 "dma_device_id": "system", 00:10:19.689 "dma_device_type": 1 00:10:19.689 }, 00:10:19.689 { 00:10:19.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.689 "dma_device_type": 2 00:10:19.689 } 00:10:19.689 ], 00:10:19.689 "driver_specific": {} 00:10:19.689 } 00:10:19.689 ] 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.689 
13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.689 [2024-11-18 13:26:49.656334] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:19.689 [2024-11-18 13:26:49.656475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:19.689 [2024-11-18 13:26:49.656517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.689 [2024-11-18 13:26:49.658307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.689 13:26:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.689 "name": "Existed_Raid", 00:10:19.689 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:19.689 "strip_size_kb": 0, 00:10:19.689 "state": "configuring", 00:10:19.689 "raid_level": "raid1", 00:10:19.689 "superblock": true, 00:10:19.689 "num_base_bdevs": 3, 00:10:19.689 "num_base_bdevs_discovered": 2, 00:10:19.689 "num_base_bdevs_operational": 3, 00:10:19.689 "base_bdevs_list": [ 00:10:19.689 { 00:10:19.689 "name": "BaseBdev1", 00:10:19.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.689 "is_configured": false, 00:10:19.689 "data_offset": 0, 00:10:19.689 "data_size": 0 00:10:19.689 }, 00:10:19.689 { 00:10:19.689 "name": "BaseBdev2", 00:10:19.689 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:19.689 "is_configured": 
true, 00:10:19.689 "data_offset": 2048, 00:10:19.689 "data_size": 63488 00:10:19.689 }, 00:10:19.689 { 00:10:19.689 "name": "BaseBdev3", 00:10:19.689 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:19.689 "is_configured": true, 00:10:19.689 "data_offset": 2048, 00:10:19.689 "data_size": 63488 00:10:19.689 } 00:10:19.689 ] 00:10:19.689 }' 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.689 13:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.257 [2024-11-18 13:26:50.087694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.257 13:26:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.257 "name": "Existed_Raid", 00:10:20.257 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:20.257 "strip_size_kb": 0, 00:10:20.257 "state": "configuring", 00:10:20.257 "raid_level": "raid1", 00:10:20.257 "superblock": true, 00:10:20.257 "num_base_bdevs": 3, 00:10:20.257 "num_base_bdevs_discovered": 1, 00:10:20.257 "num_base_bdevs_operational": 3, 00:10:20.257 "base_bdevs_list": [ 00:10:20.257 { 00:10:20.257 "name": "BaseBdev1", 00:10:20.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.257 "is_configured": false, 00:10:20.257 "data_offset": 0, 00:10:20.257 "data_size": 0 00:10:20.257 }, 00:10:20.257 { 00:10:20.257 "name": null, 00:10:20.257 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:20.257 "is_configured": false, 00:10:20.257 "data_offset": 0, 00:10:20.257 "data_size": 63488 00:10:20.257 }, 00:10:20.257 { 00:10:20.257 "name": "BaseBdev3", 00:10:20.257 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:20.257 "is_configured": true, 
00:10:20.257 "data_offset": 2048, 00:10:20.257 "data_size": 63488 00:10:20.257 } 00:10:20.257 ] 00:10:20.257 }' 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.257 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.516 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.775 [2024-11-18 13:26:50.601303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.775 BaseBdev1 00:10:20.775 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.775 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:20.775 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:20.775 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.775 
13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.775 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.775 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.775 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.776 [ 00:10:20.776 { 00:10:20.776 "name": "BaseBdev1", 00:10:20.776 "aliases": [ 00:10:20.776 "d9e8a887-e4bf-455f-99a7-d4c28b083fcb" 00:10:20.776 ], 00:10:20.776 "product_name": "Malloc disk", 00:10:20.776 "block_size": 512, 00:10:20.776 "num_blocks": 65536, 00:10:20.776 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:20.776 "assigned_rate_limits": { 00:10:20.776 "rw_ios_per_sec": 0, 00:10:20.776 "rw_mbytes_per_sec": 0, 00:10:20.776 "r_mbytes_per_sec": 0, 00:10:20.776 "w_mbytes_per_sec": 0 00:10:20.776 }, 00:10:20.776 "claimed": true, 00:10:20.776 "claim_type": "exclusive_write", 00:10:20.776 "zoned": false, 00:10:20.776 "supported_io_types": { 00:10:20.776 "read": true, 00:10:20.776 "write": true, 00:10:20.776 "unmap": true, 00:10:20.776 "flush": true, 00:10:20.776 "reset": true, 00:10:20.776 "nvme_admin": false, 00:10:20.776 "nvme_io": 
false, 00:10:20.776 "nvme_io_md": false, 00:10:20.776 "write_zeroes": true, 00:10:20.776 "zcopy": true, 00:10:20.776 "get_zone_info": false, 00:10:20.776 "zone_management": false, 00:10:20.776 "zone_append": false, 00:10:20.776 "compare": false, 00:10:20.776 "compare_and_write": false, 00:10:20.776 "abort": true, 00:10:20.776 "seek_hole": false, 00:10:20.776 "seek_data": false, 00:10:20.776 "copy": true, 00:10:20.776 "nvme_iov_md": false 00:10:20.776 }, 00:10:20.776 "memory_domains": [ 00:10:20.776 { 00:10:20.776 "dma_device_id": "system", 00:10:20.776 "dma_device_type": 1 00:10:20.776 }, 00:10:20.776 { 00:10:20.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.776 "dma_device_type": 2 00:10:20.776 } 00:10:20.776 ], 00:10:20.776 "driver_specific": {} 00:10:20.776 } 00:10:20.776 ] 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.776 13:26:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.776 "name": "Existed_Raid", 00:10:20.776 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:20.776 "strip_size_kb": 0, 00:10:20.776 "state": "configuring", 00:10:20.776 "raid_level": "raid1", 00:10:20.776 "superblock": true, 00:10:20.776 "num_base_bdevs": 3, 00:10:20.776 "num_base_bdevs_discovered": 2, 00:10:20.776 "num_base_bdevs_operational": 3, 00:10:20.776 "base_bdevs_list": [ 00:10:20.776 { 00:10:20.776 "name": "BaseBdev1", 00:10:20.776 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:20.776 "is_configured": true, 00:10:20.776 "data_offset": 2048, 00:10:20.776 "data_size": 63488 00:10:20.776 }, 00:10:20.776 { 00:10:20.776 "name": null, 00:10:20.776 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:20.776 "is_configured": false, 00:10:20.776 "data_offset": 0, 00:10:20.776 "data_size": 63488 00:10:20.776 }, 00:10:20.776 { 00:10:20.776 "name": "BaseBdev3", 00:10:20.776 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:20.776 "is_configured": true, 00:10:20.776 "data_offset": 2048, 00:10:20.776 "data_size": 63488 00:10:20.776 } 00:10:20.776 ] 00:10:20.776 }' 
00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.776 13:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.346 [2024-11-18 13:26:51.148387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.346 
13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.346 "name": "Existed_Raid", 00:10:21.346 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:21.346 "strip_size_kb": 0, 00:10:21.346 "state": "configuring", 00:10:21.346 "raid_level": "raid1", 00:10:21.346 "superblock": true, 00:10:21.346 "num_base_bdevs": 3, 00:10:21.346 "num_base_bdevs_discovered": 1, 00:10:21.346 "num_base_bdevs_operational": 3, 00:10:21.346 "base_bdevs_list": [ 00:10:21.346 { 00:10:21.346 "name": "BaseBdev1", 00:10:21.346 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:21.346 "is_configured": true, 00:10:21.346 "data_offset": 2048, 00:10:21.346 "data_size": 63488 00:10:21.346 }, 00:10:21.346 { 
00:10:21.346 "name": null, 00:10:21.346 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:21.346 "is_configured": false, 00:10:21.346 "data_offset": 0, 00:10:21.346 "data_size": 63488 00:10:21.346 }, 00:10:21.346 { 00:10:21.346 "name": null, 00:10:21.346 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:21.346 "is_configured": false, 00:10:21.346 "data_offset": 0, 00:10:21.346 "data_size": 63488 00:10:21.346 } 00:10:21.346 ] 00:10:21.346 }' 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.346 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.606 [2024-11-18 13:26:51.623622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.606 13:26:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.606 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.865 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.865 "name": "Existed_Raid", 00:10:21.865 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:21.865 "strip_size_kb": 0, 
00:10:21.865 "state": "configuring", 00:10:21.865 "raid_level": "raid1", 00:10:21.865 "superblock": true, 00:10:21.865 "num_base_bdevs": 3, 00:10:21.865 "num_base_bdevs_discovered": 2, 00:10:21.865 "num_base_bdevs_operational": 3, 00:10:21.865 "base_bdevs_list": [ 00:10:21.865 { 00:10:21.865 "name": "BaseBdev1", 00:10:21.865 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:21.865 "is_configured": true, 00:10:21.865 "data_offset": 2048, 00:10:21.865 "data_size": 63488 00:10:21.865 }, 00:10:21.865 { 00:10:21.865 "name": null, 00:10:21.865 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:21.865 "is_configured": false, 00:10:21.865 "data_offset": 0, 00:10:21.865 "data_size": 63488 00:10:21.865 }, 00:10:21.865 { 00:10:21.865 "name": "BaseBdev3", 00:10:21.865 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:21.865 "is_configured": true, 00:10:21.865 "data_offset": 2048, 00:10:21.865 "data_size": 63488 00:10:21.865 } 00:10:21.865 ] 00:10:21.865 }' 00:10:21.866 13:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.866 13:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.124 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.124 [2024-11-18 13:26:52.114818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.383 13:26:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.383 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.383 "name": "Existed_Raid", 00:10:22.383 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:22.383 "strip_size_kb": 0, 00:10:22.383 "state": "configuring", 00:10:22.383 "raid_level": "raid1", 00:10:22.383 "superblock": true, 00:10:22.383 "num_base_bdevs": 3, 00:10:22.383 "num_base_bdevs_discovered": 1, 00:10:22.383 "num_base_bdevs_operational": 3, 00:10:22.383 "base_bdevs_list": [ 00:10:22.383 { 00:10:22.383 "name": null, 00:10:22.383 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:22.383 "is_configured": false, 00:10:22.383 "data_offset": 0, 00:10:22.383 "data_size": 63488 00:10:22.383 }, 00:10:22.383 { 00:10:22.383 "name": null, 00:10:22.383 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:22.383 "is_configured": false, 00:10:22.383 "data_offset": 0, 00:10:22.383 "data_size": 63488 00:10:22.383 }, 00:10:22.384 { 00:10:22.384 "name": "BaseBdev3", 00:10:22.384 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:22.384 "is_configured": true, 00:10:22.384 "data_offset": 2048, 00:10:22.384 "data_size": 63488 00:10:22.384 } 00:10:22.384 ] 00:10:22.384 }' 00:10:22.384 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.384 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.950 [2024-11-18 13:26:52.747067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.950 "name": "Existed_Raid", 00:10:22.950 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:22.950 "strip_size_kb": 0, 00:10:22.950 "state": "configuring", 00:10:22.950 "raid_level": "raid1", 00:10:22.950 "superblock": true, 00:10:22.950 "num_base_bdevs": 3, 00:10:22.950 "num_base_bdevs_discovered": 2, 00:10:22.950 "num_base_bdevs_operational": 3, 00:10:22.950 "base_bdevs_list": [ 00:10:22.950 { 00:10:22.950 "name": null, 00:10:22.950 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:22.950 "is_configured": false, 00:10:22.950 "data_offset": 0, 00:10:22.950 "data_size": 63488 00:10:22.950 }, 00:10:22.950 { 00:10:22.950 "name": "BaseBdev2", 00:10:22.950 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:22.950 "is_configured": true, 00:10:22.950 "data_offset": 2048, 00:10:22.950 "data_size": 63488 00:10:22.950 }, 00:10:22.950 { 00:10:22.950 "name": "BaseBdev3", 00:10:22.950 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:22.950 "is_configured": true, 00:10:22.950 "data_offset": 2048, 00:10:22.950 "data_size": 63488 00:10:22.950 } 00:10:22.950 ] 00:10:22.950 }' 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.950 13:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d9e8a887-e4bf-455f-99a7-d4c28b083fcb 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.209 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.468 [2024-11-18 13:26:53.286610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:23.468 [2024-11-18 13:26:53.286861] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:23.468 [2024-11-18 13:26:53.286874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.468 [2024-11-18 13:26:53.287125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:23.468 [2024-11-18 13:26:53.287315] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:23.468 [2024-11-18 13:26:53.287329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:23.468 NewBaseBdev 00:10:23.468 [2024-11-18 13:26:53.287456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.468 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.469 [ 00:10:23.469 { 00:10:23.469 "name": "NewBaseBdev", 00:10:23.469 "aliases": [ 00:10:23.469 "d9e8a887-e4bf-455f-99a7-d4c28b083fcb" 00:10:23.469 ], 00:10:23.469 "product_name": "Malloc disk", 00:10:23.469 "block_size": 512, 00:10:23.469 "num_blocks": 65536, 00:10:23.469 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:23.469 "assigned_rate_limits": { 00:10:23.469 "rw_ios_per_sec": 0, 00:10:23.469 "rw_mbytes_per_sec": 0, 00:10:23.469 "r_mbytes_per_sec": 0, 00:10:23.469 "w_mbytes_per_sec": 0 00:10:23.469 }, 00:10:23.469 "claimed": true, 00:10:23.469 "claim_type": "exclusive_write", 00:10:23.469 "zoned": false, 00:10:23.469 "supported_io_types": { 00:10:23.469 "read": true, 00:10:23.469 "write": true, 00:10:23.469 "unmap": true, 00:10:23.469 "flush": true, 00:10:23.469 "reset": true, 00:10:23.469 "nvme_admin": false, 00:10:23.469 "nvme_io": false, 00:10:23.469 "nvme_io_md": false, 00:10:23.469 "write_zeroes": true, 00:10:23.469 "zcopy": true, 00:10:23.469 "get_zone_info": false, 00:10:23.469 "zone_management": false, 00:10:23.469 "zone_append": false, 00:10:23.469 "compare": false, 00:10:23.469 "compare_and_write": false, 00:10:23.469 "abort": true, 00:10:23.469 "seek_hole": false, 00:10:23.469 "seek_data": false, 00:10:23.469 "copy": true, 00:10:23.469 "nvme_iov_md": false 00:10:23.469 }, 00:10:23.469 "memory_domains": [ 00:10:23.469 { 00:10:23.469 "dma_device_id": "system", 00:10:23.469 "dma_device_type": 1 00:10:23.469 }, 00:10:23.469 { 00:10:23.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.469 "dma_device_type": 2 00:10:23.469 } 00:10:23.469 ], 00:10:23.469 
"driver_specific": {} 00:10:23.469 } 00:10:23.469 ] 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.469 "name": "Existed_Raid", 00:10:23.469 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:23.469 "strip_size_kb": 0, 00:10:23.469 "state": "online", 00:10:23.469 "raid_level": "raid1", 00:10:23.469 "superblock": true, 00:10:23.469 "num_base_bdevs": 3, 00:10:23.469 "num_base_bdevs_discovered": 3, 00:10:23.469 "num_base_bdevs_operational": 3, 00:10:23.469 "base_bdevs_list": [ 00:10:23.469 { 00:10:23.469 "name": "NewBaseBdev", 00:10:23.469 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:23.469 "is_configured": true, 00:10:23.469 "data_offset": 2048, 00:10:23.469 "data_size": 63488 00:10:23.469 }, 00:10:23.469 { 00:10:23.469 "name": "BaseBdev2", 00:10:23.469 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:23.469 "is_configured": true, 00:10:23.469 "data_offset": 2048, 00:10:23.469 "data_size": 63488 00:10:23.469 }, 00:10:23.469 { 00:10:23.469 "name": "BaseBdev3", 00:10:23.469 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:23.469 "is_configured": true, 00:10:23.469 "data_offset": 2048, 00:10:23.469 "data_size": 63488 00:10:23.469 } 00:10:23.469 ] 00:10:23.469 }' 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.469 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.728 13:26:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.728 [2024-11-18 13:26:53.714254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.728 "name": "Existed_Raid", 00:10:23.728 "aliases": [ 00:10:23.728 "318a2fe9-dcbd-4709-a890-29e58084ce7c" 00:10:23.728 ], 00:10:23.728 "product_name": "Raid Volume", 00:10:23.728 "block_size": 512, 00:10:23.728 "num_blocks": 63488, 00:10:23.728 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:23.728 "assigned_rate_limits": { 00:10:23.728 "rw_ios_per_sec": 0, 00:10:23.728 "rw_mbytes_per_sec": 0, 00:10:23.728 "r_mbytes_per_sec": 0, 00:10:23.728 "w_mbytes_per_sec": 0 00:10:23.728 }, 00:10:23.728 "claimed": false, 00:10:23.728 "zoned": false, 00:10:23.728 "supported_io_types": { 00:10:23.728 "read": true, 00:10:23.728 "write": true, 00:10:23.728 "unmap": false, 00:10:23.728 "flush": false, 00:10:23.728 "reset": true, 00:10:23.728 "nvme_admin": false, 00:10:23.728 "nvme_io": false, 00:10:23.728 "nvme_io_md": false, 00:10:23.728 "write_zeroes": true, 00:10:23.728 "zcopy": false, 00:10:23.728 "get_zone_info": false, 00:10:23.728 "zone_management": false, 00:10:23.728 "zone_append": false, 
00:10:23.728 "compare": false, 00:10:23.728 "compare_and_write": false, 00:10:23.728 "abort": false, 00:10:23.728 "seek_hole": false, 00:10:23.728 "seek_data": false, 00:10:23.728 "copy": false, 00:10:23.728 "nvme_iov_md": false 00:10:23.728 }, 00:10:23.728 "memory_domains": [ 00:10:23.728 { 00:10:23.728 "dma_device_id": "system", 00:10:23.728 "dma_device_type": 1 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.728 "dma_device_type": 2 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "dma_device_id": "system", 00:10:23.728 "dma_device_type": 1 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.728 "dma_device_type": 2 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "dma_device_id": "system", 00:10:23.728 "dma_device_type": 1 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.728 "dma_device_type": 2 00:10:23.728 } 00:10:23.728 ], 00:10:23.728 "driver_specific": { 00:10:23.728 "raid": { 00:10:23.728 "uuid": "318a2fe9-dcbd-4709-a890-29e58084ce7c", 00:10:23.728 "strip_size_kb": 0, 00:10:23.728 "state": "online", 00:10:23.728 "raid_level": "raid1", 00:10:23.728 "superblock": true, 00:10:23.728 "num_base_bdevs": 3, 00:10:23.728 "num_base_bdevs_discovered": 3, 00:10:23.728 "num_base_bdevs_operational": 3, 00:10:23.728 "base_bdevs_list": [ 00:10:23.728 { 00:10:23.728 "name": "NewBaseBdev", 00:10:23.728 "uuid": "d9e8a887-e4bf-455f-99a7-d4c28b083fcb", 00:10:23.728 "is_configured": true, 00:10:23.728 "data_offset": 2048, 00:10:23.728 "data_size": 63488 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "name": "BaseBdev2", 00:10:23.728 "uuid": "d294dc6e-adc6-4a94-96d9-ceb4bc3fd2b3", 00:10:23.728 "is_configured": true, 00:10:23.728 "data_offset": 2048, 00:10:23.728 "data_size": 63488 00:10:23.728 }, 00:10:23.728 { 00:10:23.728 "name": "BaseBdev3", 00:10:23.728 "uuid": "76733bf8-d3d3-4464-ac17-884dafa13583", 00:10:23.728 "is_configured": true, 00:10:23.728 
"data_offset": 2048, 00:10:23.728 "data_size": 63488 00:10:23.728 } 00:10:23.728 ] 00:10:23.728 } 00:10:23.728 } 00:10:23.728 }' 00:10:23.728 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.987 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:23.987 BaseBdev2 00:10:23.987 BaseBdev3' 00:10:23.987 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.988 13:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:23.988 [2024-11-18 13:26:54.001517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.988 [2024-11-18 13:26:54.001568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.988 [2024-11-18 13:26:54.001647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.988 [2024-11-18 13:26:54.001922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.988 [2024-11-18 13:26:54.001933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:23.988 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.988 13:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68034 00:10:23.988 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68034 ']' 00:10:23.988 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68034 00:10:23.988 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:23.988 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.988 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68034 00:10:24.247 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.247 killing process with pid 68034 00:10:24.247 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.247 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68034' 00:10:24.247 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # 
kill 68034 00:10:24.247 [2024-11-18 13:26:54.051780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.247 13:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68034 00:10:24.506 [2024-11-18 13:26:54.352814] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.443 13:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:25.443 00:10:25.443 real 0m10.646s 00:10:25.443 user 0m16.916s 00:10:25.443 sys 0m1.919s 00:10:25.443 13:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.443 13:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.443 ************************************ 00:10:25.443 END TEST raid_state_function_test_sb 00:10:25.443 ************************************ 00:10:25.702 13:26:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:25.702 13:26:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:25.703 13:26:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.703 13:26:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.703 ************************************ 00:10:25.703 START TEST raid_superblock_test 00:10:25.703 ************************************ 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:25.703 13:26:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68656 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68656 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68656 ']' 00:10:25.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.703 13:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.703 [2024-11-18 13:26:55.638874] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:25.703 [2024-11-18 13:26:55.639119] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68656 ] 00:10:25.962 [2024-11-18 13:26:55.817528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.962 [2024-11-18 13:26:55.927955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.222 [2024-11-18 13:26:56.130066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.222 [2024-11-18 13:26:56.130139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_malloc=malloc1 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.482 malloc1 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.482 [2024-11-18 13:26:56.515512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:26.482 [2024-11-18 13:26:56.515675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.482 [2024-11-18 13:26:56.515720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:26.482 [2024-11-18 13:26:56.515750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.482 [2024-11-18 13:26:56.517806] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.482 [2024-11-18 13:26:56.517877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:26.482 pt1 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.482 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.742 malloc2 00:10:26.742 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.742 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.742 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.742 13:26:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.742 [2024-11-18 13:26:56.576177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.742 [2024-11-18 13:26:56.576242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.742 [2024-11-18 13:26:56.576264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:26.742 [2024-11-18 13:26:56.576273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.742 [2024-11-18 13:26:56.578344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.742 [2024-11-18 13:26:56.578495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.742 pt2 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.743 malloc3 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.743 [2024-11-18 13:26:56.644950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.743 [2024-11-18 13:26:56.645065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.743 [2024-11-18 13:26:56.645107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:26.743 [2024-11-18 13:26:56.645150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.743 [2024-11-18 13:26:56.647343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.743 [2024-11-18 13:26:56.647431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.743 pt3 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.743 [2024-11-18 13:26:56.656979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:26.743 [2024-11-18 13:26:56.658834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.743 [2024-11-18 13:26:56.658941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.743 [2024-11-18 13:26:56.659142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:26.743 [2024-11-18 13:26:56.659195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.743 [2024-11-18 13:26:56.659464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:26.743 [2024-11-18 13:26:56.659666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:26.743 [2024-11-18 13:26:56.659712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:26.743 [2024-11-18 13:26:56.659916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.743 13:26:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.743 "name": "raid_bdev1", 00:10:26.743 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:26.743 "strip_size_kb": 0, 00:10:26.743 "state": "online", 00:10:26.743 "raid_level": "raid1", 00:10:26.743 "superblock": true, 00:10:26.743 "num_base_bdevs": 3, 00:10:26.743 "num_base_bdevs_discovered": 3, 00:10:26.743 "num_base_bdevs_operational": 3, 00:10:26.743 "base_bdevs_list": [ 00:10:26.743 { 00:10:26.743 "name": "pt1", 00:10:26.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.743 "is_configured": true, 00:10:26.743 "data_offset": 2048, 00:10:26.743 "data_size": 63488 00:10:26.743 }, 00:10:26.743 { 00:10:26.743 "name": "pt2", 00:10:26.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.743 "is_configured": true, 00:10:26.743 "data_offset": 2048, 00:10:26.743 "data_size": 63488 00:10:26.743 }, 00:10:26.743 { 00:10:26.743 "name": "pt3", 00:10:26.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.743 
"is_configured": true, 00:10:26.743 "data_offset": 2048, 00:10:26.743 "data_size": 63488 00:10:26.743 } 00:10:26.743 ] 00:10:26.743 }' 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.743 13:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.312 [2024-11-18 13:26:57.072535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.312 "name": "raid_bdev1", 00:10:27.312 "aliases": [ 00:10:27.312 "c9953978-2796-40ad-bc59-287f0c01ddef" 00:10:27.312 ], 00:10:27.312 "product_name": "Raid Volume", 00:10:27.312 "block_size": 512, 00:10:27.312 "num_blocks": 63488, 00:10:27.312 "uuid": 
"c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:27.312 "assigned_rate_limits": { 00:10:27.312 "rw_ios_per_sec": 0, 00:10:27.312 "rw_mbytes_per_sec": 0, 00:10:27.312 "r_mbytes_per_sec": 0, 00:10:27.312 "w_mbytes_per_sec": 0 00:10:27.312 }, 00:10:27.312 "claimed": false, 00:10:27.312 "zoned": false, 00:10:27.312 "supported_io_types": { 00:10:27.312 "read": true, 00:10:27.312 "write": true, 00:10:27.312 "unmap": false, 00:10:27.312 "flush": false, 00:10:27.312 "reset": true, 00:10:27.312 "nvme_admin": false, 00:10:27.312 "nvme_io": false, 00:10:27.312 "nvme_io_md": false, 00:10:27.312 "write_zeroes": true, 00:10:27.312 "zcopy": false, 00:10:27.312 "get_zone_info": false, 00:10:27.312 "zone_management": false, 00:10:27.312 "zone_append": false, 00:10:27.312 "compare": false, 00:10:27.312 "compare_and_write": false, 00:10:27.312 "abort": false, 00:10:27.312 "seek_hole": false, 00:10:27.312 "seek_data": false, 00:10:27.312 "copy": false, 00:10:27.312 "nvme_iov_md": false 00:10:27.312 }, 00:10:27.312 "memory_domains": [ 00:10:27.312 { 00:10:27.312 "dma_device_id": "system", 00:10:27.312 "dma_device_type": 1 00:10:27.312 }, 00:10:27.312 { 00:10:27.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.312 "dma_device_type": 2 00:10:27.312 }, 00:10:27.312 { 00:10:27.312 "dma_device_id": "system", 00:10:27.312 "dma_device_type": 1 00:10:27.312 }, 00:10:27.312 { 00:10:27.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.312 "dma_device_type": 2 00:10:27.312 }, 00:10:27.312 { 00:10:27.312 "dma_device_id": "system", 00:10:27.312 "dma_device_type": 1 00:10:27.312 }, 00:10:27.312 { 00:10:27.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.312 "dma_device_type": 2 00:10:27.312 } 00:10:27.312 ], 00:10:27.312 "driver_specific": { 00:10:27.312 "raid": { 00:10:27.312 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:27.312 "strip_size_kb": 0, 00:10:27.312 "state": "online", 00:10:27.312 "raid_level": "raid1", 00:10:27.312 "superblock": true, 00:10:27.312 "num_base_bdevs": 
3, 00:10:27.312 "num_base_bdevs_discovered": 3, 00:10:27.312 "num_base_bdevs_operational": 3, 00:10:27.312 "base_bdevs_list": [ 00:10:27.312 { 00:10:27.312 "name": "pt1", 00:10:27.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.312 "is_configured": true, 00:10:27.312 "data_offset": 2048, 00:10:27.312 "data_size": 63488 00:10:27.312 }, 00:10:27.312 { 00:10:27.312 "name": "pt2", 00:10:27.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.312 "is_configured": true, 00:10:27.312 "data_offset": 2048, 00:10:27.312 "data_size": 63488 00:10:27.312 }, 00:10:27.312 { 00:10:27.312 "name": "pt3", 00:10:27.312 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.312 "is_configured": true, 00:10:27.312 "data_offset": 2048, 00:10:27.312 "data_size": 63488 00:10:27.312 } 00:10:27.312 ] 00:10:27.312 } 00:10:27.312 } 00:10:27.312 }' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:27.312 pt2 00:10:27.312 pt3' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.312 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 [2024-11-18 13:26:57.375949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c9953978-2796-40ad-bc59-287f0c01ddef 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c9953978-2796-40ad-bc59-287f0c01ddef ']' 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 [2024-11-18 13:26:57.407631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.573 [2024-11-18 13:26:57.407658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.573 [2024-11-18 13:26:57.407739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.573 [2024-11-18 13:26:57.407809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.573 [2024-11-18 13:26:57.407818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:26:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:27.573 13:26:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:27.573 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.574 [2024-11-18 13:26:57.563420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:27.574 [2024-11-18 13:26:57.565195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:27.574 [2024-11-18 13:26:57.565244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:27.574 [2024-11-18 13:26:57.565290] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:27.574 [2024-11-18 13:26:57.565335] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:27.574 [2024-11-18 13:26:57.565353] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:27.574 [2024-11-18 13:26:57.565369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.574 [2024-11-18 13:26:57.565378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:27.574 request: 00:10:27.574 { 00:10:27.574 "name": "raid_bdev1", 00:10:27.574 "raid_level": "raid1", 00:10:27.574 "base_bdevs": [ 00:10:27.574 "malloc1", 00:10:27.574 "malloc2", 00:10:27.574 "malloc3" 00:10:27.574 ], 00:10:27.574 "superblock": false, 00:10:27.574 "method": "bdev_raid_create", 00:10:27.574 "req_id": 1 00:10:27.574 } 00:10:27.574 Got JSON-RPC error 
response 00:10:27.574 response: 00:10:27.574 { 00:10:27.574 "code": -17, 00:10:27.574 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:27.574 } 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.574 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.834 [2024-11-18 13:26:57.631250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:27.834 [2024-11-18 13:26:57.631301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:10:27.834 [2024-11-18 13:26:57.631324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:27.834 [2024-11-18 13:26:57.631333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.834 [2024-11-18 13:26:57.633445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.834 [2024-11-18 13:26:57.633479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:27.834 [2024-11-18 13:26:57.633548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:27.834 [2024-11-18 13:26:57.633592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:27.834 pt1 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.834 13:26:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.834 "name": "raid_bdev1", 00:10:27.834 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:27.834 "strip_size_kb": 0, 00:10:27.834 "state": "configuring", 00:10:27.834 "raid_level": "raid1", 00:10:27.834 "superblock": true, 00:10:27.834 "num_base_bdevs": 3, 00:10:27.834 "num_base_bdevs_discovered": 1, 00:10:27.834 "num_base_bdevs_operational": 3, 00:10:27.834 "base_bdevs_list": [ 00:10:27.834 { 00:10:27.834 "name": "pt1", 00:10:27.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.834 "is_configured": true, 00:10:27.834 "data_offset": 2048, 00:10:27.834 "data_size": 63488 00:10:27.834 }, 00:10:27.834 { 00:10:27.834 "name": null, 00:10:27.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.834 "is_configured": false, 00:10:27.834 "data_offset": 2048, 00:10:27.834 "data_size": 63488 00:10:27.834 }, 00:10:27.834 { 00:10:27.834 "name": null, 00:10:27.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.834 "is_configured": false, 00:10:27.834 "data_offset": 2048, 00:10:27.834 "data_size": 63488 00:10:27.834 } 00:10:27.834 ] 00:10:27.834 }' 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.834 13:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 
3 -gt 2 ']' 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.094 [2024-11-18 13:26:58.054622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.094 [2024-11-18 13:26:58.054694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.094 [2024-11-18 13:26:58.054718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:28.094 [2024-11-18 13:26:58.054729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.094 [2024-11-18 13:26:58.055211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.094 [2024-11-18 13:26:58.055238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.094 [2024-11-18 13:26:58.055332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:28.094 [2024-11-18 13:26:58.055357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.094 pt2 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.094 [2024-11-18 13:26:58.066569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.094 "name": "raid_bdev1", 00:10:28.094 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:28.094 "strip_size_kb": 0, 00:10:28.094 "state": "configuring", 00:10:28.094 "raid_level": "raid1", 00:10:28.094 "superblock": true, 
00:10:28.094 "num_base_bdevs": 3, 00:10:28.094 "num_base_bdevs_discovered": 1, 00:10:28.094 "num_base_bdevs_operational": 3, 00:10:28.094 "base_bdevs_list": [ 00:10:28.094 { 00:10:28.094 "name": "pt1", 00:10:28.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.094 "is_configured": true, 00:10:28.094 "data_offset": 2048, 00:10:28.094 "data_size": 63488 00:10:28.094 }, 00:10:28.094 { 00:10:28.094 "name": null, 00:10:28.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.094 "is_configured": false, 00:10:28.094 "data_offset": 0, 00:10:28.094 "data_size": 63488 00:10:28.094 }, 00:10:28.094 { 00:10:28.094 "name": null, 00:10:28.094 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.094 "is_configured": false, 00:10:28.094 "data_offset": 2048, 00:10:28.094 "data_size": 63488 00:10:28.094 } 00:10:28.094 ] 00:10:28.094 }' 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.094 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.664 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:28.664 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:28.664 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.664 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.664 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.664 [2024-11-18 13:26:58.489871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.664 [2024-11-18 13:26:58.489944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.664 [2024-11-18 13:26:58.489969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 
00:10:28.664 [2024-11-18 13:26:58.489981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.664 [2024-11-18 13:26:58.490452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.664 [2024-11-18 13:26:58.490481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.664 [2024-11-18 13:26:58.490565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:28.664 [2024-11-18 13:26:58.490611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.664 pt2 00:10:28.664 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.664 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.665 [2024-11-18 13:26:58.501791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:28.665 [2024-11-18 13:26:58.501837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.665 [2024-11-18 13:26:58.501855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:28.665 [2024-11-18 13:26:58.501868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.665 [2024-11-18 13:26:58.502217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.665 [2024-11-18 13:26:58.502244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt3 00:10:28.665 [2024-11-18 13:26:58.502302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:28.665 [2024-11-18 13:26:58.502327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:28.665 [2024-11-18 13:26:58.502462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:28.665 [2024-11-18 13:26:58.502481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.665 [2024-11-18 13:26:58.502705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:28.665 [2024-11-18 13:26:58.502867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:28.665 [2024-11-18 13:26:58.502882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:28.665 [2024-11-18 13:26:58.503014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.665 pt3 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.665 "name": "raid_bdev1", 00:10:28.665 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:28.665 "strip_size_kb": 0, 00:10:28.665 "state": "online", 00:10:28.665 "raid_level": "raid1", 00:10:28.665 "superblock": true, 00:10:28.665 "num_base_bdevs": 3, 00:10:28.665 "num_base_bdevs_discovered": 3, 00:10:28.665 "num_base_bdevs_operational": 3, 00:10:28.665 "base_bdevs_list": [ 00:10:28.665 { 00:10:28.665 "name": "pt1", 00:10:28.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:28.665 "is_configured": true, 00:10:28.665 "data_offset": 2048, 00:10:28.665 "data_size": 63488 00:10:28.665 }, 00:10:28.665 { 00:10:28.665 "name": "pt2", 00:10:28.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.665 "is_configured": true, 00:10:28.665 "data_offset": 2048, 00:10:28.665 "data_size": 63488 00:10:28.665 }, 00:10:28.665 { 00:10:28.665 "name": 
"pt3", 00:10:28.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.665 "is_configured": true, 00:10:28.665 "data_offset": 2048, 00:10:28.665 "data_size": 63488 00:10:28.665 } 00:10:28.665 ] 00:10:28.665 }' 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.665 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.925 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.925 [2024-11-18 13:26:58.957359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.186 13:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.186 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.186 "name": "raid_bdev1", 00:10:29.186 "aliases": [ 00:10:29.186 "c9953978-2796-40ad-bc59-287f0c01ddef" 00:10:29.186 ], 00:10:29.186 "product_name": "Raid Volume", 00:10:29.186 
"block_size": 512, 00:10:29.186 "num_blocks": 63488, 00:10:29.186 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:29.186 "assigned_rate_limits": { 00:10:29.186 "rw_ios_per_sec": 0, 00:10:29.186 "rw_mbytes_per_sec": 0, 00:10:29.186 "r_mbytes_per_sec": 0, 00:10:29.186 "w_mbytes_per_sec": 0 00:10:29.186 }, 00:10:29.186 "claimed": false, 00:10:29.186 "zoned": false, 00:10:29.186 "supported_io_types": { 00:10:29.186 "read": true, 00:10:29.186 "write": true, 00:10:29.186 "unmap": false, 00:10:29.186 "flush": false, 00:10:29.186 "reset": true, 00:10:29.186 "nvme_admin": false, 00:10:29.186 "nvme_io": false, 00:10:29.186 "nvme_io_md": false, 00:10:29.186 "write_zeroes": true, 00:10:29.186 "zcopy": false, 00:10:29.186 "get_zone_info": false, 00:10:29.186 "zone_management": false, 00:10:29.186 "zone_append": false, 00:10:29.186 "compare": false, 00:10:29.186 "compare_and_write": false, 00:10:29.186 "abort": false, 00:10:29.186 "seek_hole": false, 00:10:29.186 "seek_data": false, 00:10:29.186 "copy": false, 00:10:29.186 "nvme_iov_md": false 00:10:29.186 }, 00:10:29.186 "memory_domains": [ 00:10:29.186 { 00:10:29.186 "dma_device_id": "system", 00:10:29.186 "dma_device_type": 1 00:10:29.186 }, 00:10:29.186 { 00:10:29.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.186 "dma_device_type": 2 00:10:29.186 }, 00:10:29.186 { 00:10:29.186 "dma_device_id": "system", 00:10:29.186 "dma_device_type": 1 00:10:29.186 }, 00:10:29.186 { 00:10:29.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.186 "dma_device_type": 2 00:10:29.186 }, 00:10:29.186 { 00:10:29.186 "dma_device_id": "system", 00:10:29.186 "dma_device_type": 1 00:10:29.186 }, 00:10:29.186 { 00:10:29.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.186 "dma_device_type": 2 00:10:29.186 } 00:10:29.186 ], 00:10:29.186 "driver_specific": { 00:10:29.186 "raid": { 00:10:29.186 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:29.186 "strip_size_kb": 0, 00:10:29.186 "state": "online", 00:10:29.186 
"raid_level": "raid1", 00:10:29.186 "superblock": true, 00:10:29.186 "num_base_bdevs": 3, 00:10:29.186 "num_base_bdevs_discovered": 3, 00:10:29.186 "num_base_bdevs_operational": 3, 00:10:29.186 "base_bdevs_list": [ 00:10:29.186 { 00:10:29.186 "name": "pt1", 00:10:29.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.186 "is_configured": true, 00:10:29.186 "data_offset": 2048, 00:10:29.186 "data_size": 63488 00:10:29.186 }, 00:10:29.186 { 00:10:29.186 "name": "pt2", 00:10:29.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.186 "is_configured": true, 00:10:29.186 "data_offset": 2048, 00:10:29.186 "data_size": 63488 00:10:29.186 }, 00:10:29.186 { 00:10:29.186 "name": "pt3", 00:10:29.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.186 "is_configured": true, 00:10:29.186 "data_offset": 2048, 00:10:29.186 "data_size": 63488 00:10:29.186 } 00:10:29.186 ] 00:10:29.186 } 00:10:29.186 } 00:10:29.186 }' 00:10:29.186 13:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.186 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:29.186 pt2 00:10:29.186 pt3' 00:10:29.186 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.186 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.186 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.186 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:29.186 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.186 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.186 13:26:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.187 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.447 [2024-11-18 13:26:59.244797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c9953978-2796-40ad-bc59-287f0c01ddef '!=' c9953978-2796-40ad-bc59-287f0c01ddef ']' 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.447 [2024-11-18 13:26:59.292481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.447 "name": "raid_bdev1", 00:10:29.447 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:29.447 "strip_size_kb": 0, 00:10:29.447 "state": "online", 00:10:29.447 "raid_level": "raid1", 00:10:29.447 "superblock": true, 00:10:29.447 "num_base_bdevs": 3, 00:10:29.447 "num_base_bdevs_discovered": 2, 00:10:29.447 "num_base_bdevs_operational": 2, 00:10:29.447 
"base_bdevs_list": [ 00:10:29.447 { 00:10:29.447 "name": null, 00:10:29.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.447 "is_configured": false, 00:10:29.447 "data_offset": 0, 00:10:29.447 "data_size": 63488 00:10:29.447 }, 00:10:29.447 { 00:10:29.447 "name": "pt2", 00:10:29.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.447 "is_configured": true, 00:10:29.447 "data_offset": 2048, 00:10:29.447 "data_size": 63488 00:10:29.447 }, 00:10:29.447 { 00:10:29.447 "name": "pt3", 00:10:29.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.447 "is_configured": true, 00:10:29.447 "data_offset": 2048, 00:10:29.447 "data_size": 63488 00:10:29.447 } 00:10:29.447 ] 00:10:29.447 }' 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.447 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.707 [2024-11-18 13:26:59.719710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.707 [2024-11-18 13:26:59.719749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.707 [2024-11-18 13:26:59.719847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.707 [2024-11-18 13:26:59.719907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.707 [2024-11-18 13:26:59.719929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:29.707 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.967 13:26:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.967 [2024-11-18 13:26:59.803517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:29.967 [2024-11-18 13:26:59.803574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.967 [2024-11-18 13:26:59.803590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:29.967 [2024-11-18 13:26:59.803601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.967 [2024-11-18 13:26:59.805799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.967 [2024-11-18 13:26:59.805839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:29.967 [2024-11-18 13:26:59.805911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:29.967 [2024-11-18 13:26:59.805953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:29.967 pt2 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 
00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.967 "name": "raid_bdev1", 00:10:29.967 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:29.967 "strip_size_kb": 0, 00:10:29.967 "state": "configuring", 00:10:29.967 "raid_level": "raid1", 00:10:29.967 "superblock": true, 00:10:29.967 "num_base_bdevs": 3, 00:10:29.967 "num_base_bdevs_discovered": 1, 00:10:29.967 "num_base_bdevs_operational": 2, 00:10:29.967 
"base_bdevs_list": [ 00:10:29.967 { 00:10:29.967 "name": null, 00:10:29.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.967 "is_configured": false, 00:10:29.967 "data_offset": 2048, 00:10:29.967 "data_size": 63488 00:10:29.967 }, 00:10:29.967 { 00:10:29.967 "name": "pt2", 00:10:29.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.967 "is_configured": true, 00:10:29.967 "data_offset": 2048, 00:10:29.967 "data_size": 63488 00:10:29.967 }, 00:10:29.967 { 00:10:29.967 "name": null, 00:10:29.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:29.967 "is_configured": false, 00:10:29.967 "data_offset": 2048, 00:10:29.967 "data_size": 63488 00:10:29.967 } 00:10:29.967 ] 00:10:29.967 }' 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.967 13:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.288 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:30.288 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:30.288 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:30.288 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:30.288 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.288 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.288 [2024-11-18 13:27:00.270836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:30.288 [2024-11-18 13:27:00.270917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.288 [2024-11-18 13:27:00.270942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:30.288 [2024-11-18 13:27:00.270955] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.288 [2024-11-18 13:27:00.271467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.288 [2024-11-18 13:27:00.271499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:30.289 [2024-11-18 13:27:00.271601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:30.289 [2024-11-18 13:27:00.271639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:30.289 [2024-11-18 13:27:00.271774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:30.289 [2024-11-18 13:27:00.271791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:30.289 [2024-11-18 13:27:00.272051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:30.289 [2024-11-18 13:27:00.272227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:30.289 [2024-11-18 13:27:00.272243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:30.289 [2024-11-18 13:27:00.272392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.289 pt3 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.289 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.548 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.548 "name": "raid_bdev1", 00:10:30.548 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:30.548 "strip_size_kb": 0, 00:10:30.548 "state": "online", 00:10:30.548 "raid_level": "raid1", 00:10:30.548 "superblock": true, 00:10:30.548 "num_base_bdevs": 3, 00:10:30.548 "num_base_bdevs_discovered": 2, 00:10:30.548 "num_base_bdevs_operational": 2, 00:10:30.548 "base_bdevs_list": [ 00:10:30.548 { 00:10:30.548 "name": null, 00:10:30.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.548 "is_configured": false, 00:10:30.548 "data_offset": 2048, 00:10:30.548 "data_size": 63488 00:10:30.548 }, 00:10:30.548 { 00:10:30.548 "name": "pt2", 00:10:30.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.548 "is_configured": true, 00:10:30.548 "data_offset": 2048, 
00:10:30.548 "data_size": 63488 00:10:30.548 }, 00:10:30.548 { 00:10:30.549 "name": "pt3", 00:10:30.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:30.549 "is_configured": true, 00:10:30.549 "data_offset": 2048, 00:10:30.549 "data_size": 63488 00:10:30.549 } 00:10:30.549 ] 00:10:30.549 }' 00:10:30.549 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.549 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.808 [2024-11-18 13:27:00.745994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.808 [2024-11-18 13:27:00.746040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.808 [2024-11-18 13:27:00.746138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.808 [2024-11-18 13:27:00.746202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.808 [2024-11-18 13:27:00.746212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.808 [2024-11-18 13:27:00.805873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:30.808 [2024-11-18 13:27:00.805931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.808 [2024-11-18 13:27:00.805951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:30.808 [2024-11-18 13:27:00.805959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.808 [2024-11-18 13:27:00.808149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.808 [2024-11-18 13:27:00.808182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:10:30.808 [2024-11-18 13:27:00.808256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:30.808 [2024-11-18 13:27:00.808296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:30.808 [2024-11-18 13:27:00.808414] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:30.808 [2024-11-18 13:27:00.808424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.808 [2024-11-18 13:27:00.808439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:30.808 [2024-11-18 13:27:00.808487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:30.808 pt1 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.808 13:27:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.808 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.809 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.809 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.809 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.068 13:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.068 "name": "raid_bdev1", 00:10:31.068 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:31.068 "strip_size_kb": 0, 00:10:31.068 "state": "configuring", 00:10:31.068 "raid_level": "raid1", 00:10:31.068 "superblock": true, 00:10:31.068 "num_base_bdevs": 3, 00:10:31.068 "num_base_bdevs_discovered": 1, 00:10:31.068 "num_base_bdevs_operational": 2, 00:10:31.068 "base_bdevs_list": [ 00:10:31.068 { 00:10:31.068 "name": null, 00:10:31.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.068 "is_configured": false, 00:10:31.068 "data_offset": 2048, 00:10:31.068 "data_size": 63488 00:10:31.068 }, 00:10:31.068 { 00:10:31.068 "name": "pt2", 00:10:31.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.068 "is_configured": true, 00:10:31.068 "data_offset": 2048, 00:10:31.068 "data_size": 63488 00:10:31.068 }, 00:10:31.068 { 00:10:31.068 "name": null, 00:10:31.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.068 "is_configured": false, 00:10:31.068 "data_offset": 2048, 00:10:31.068 "data_size": 63488 00:10:31.068 } 00:10:31.068 ] 00:10:31.068 }' 00:10:31.068 13:27:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.068 13:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.328 [2024-11-18 13:27:01.277086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:31.328 [2024-11-18 13:27:01.277157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.328 [2024-11-18 13:27:01.277179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:31.328 [2024-11-18 13:27:01.277194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.328 [2024-11-18 13:27:01.277646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.328 [2024-11-18 13:27:01.277671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:31.328 [2024-11-18 13:27:01.277749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt3 00:10:31.328 [2024-11-18 13:27:01.277797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:31.328 [2024-11-18 13:27:01.277925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:31.328 [2024-11-18 13:27:01.277939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.328 [2024-11-18 13:27:01.278198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:31.328 [2024-11-18 13:27:01.278360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:31.328 [2024-11-18 13:27:01.278379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:31.328 [2024-11-18 13:27:01.278522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.328 pt3 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.328 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.328 "name": "raid_bdev1", 00:10:31.328 "uuid": "c9953978-2796-40ad-bc59-287f0c01ddef", 00:10:31.328 "strip_size_kb": 0, 00:10:31.328 "state": "online", 00:10:31.328 "raid_level": "raid1", 00:10:31.328 "superblock": true, 00:10:31.328 "num_base_bdevs": 3, 00:10:31.328 "num_base_bdevs_discovered": 2, 00:10:31.328 "num_base_bdevs_operational": 2, 00:10:31.328 "base_bdevs_list": [ 00:10:31.328 { 00:10:31.328 "name": null, 00:10:31.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.328 "is_configured": false, 00:10:31.328 "data_offset": 2048, 00:10:31.328 "data_size": 63488 00:10:31.328 }, 00:10:31.328 { 00:10:31.328 "name": "pt2", 00:10:31.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.328 "is_configured": true, 00:10:31.329 "data_offset": 2048, 00:10:31.329 "data_size": 63488 00:10:31.329 }, 00:10:31.329 { 00:10:31.329 "name": "pt3", 00:10:31.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.329 "is_configured": true, 00:10:31.329 "data_offset": 2048, 00:10:31.329 "data_size": 63488 00:10:31.329 } 00:10:31.329 ] 00:10:31.329 }' 00:10:31.329 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.329 
13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.896 [2024-11-18 13:27:01.784532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c9953978-2796-40ad-bc59-287f0c01ddef '!=' c9953978-2796-40ad-bc59-287f0c01ddef ']' 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68656 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68656 ']' 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68656 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:31.896 13:27:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68656 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.896 killing process with pid 68656 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68656' 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68656 00:10:31.896 [2024-11-18 13:27:01.858704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.896 [2024-11-18 13:27:01.858802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.896 [2024-11-18 13:27:01.858868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.896 [2024-11-18 13:27:01.858885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:31.896 13:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68656 00:10:32.155 [2024-11-18 13:27:02.156581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.532 13:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:33.533 00:10:33.533 real 0m7.717s 00:10:33.533 user 0m12.098s 00:10:33.533 sys 0m1.409s 00:10:33.533 13:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.533 13:27:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.533 ************************************ 00:10:33.533 END TEST raid_superblock_test 00:10:33.533 ************************************ 00:10:33.533 
13:27:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:33.533 13:27:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.533 13:27:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.533 13:27:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.533 ************************************ 00:10:33.533 START TEST raid_read_error_test 00:10:33.533 ************************************ 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oPaL9gTiQ5 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69100 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69100 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69100 ']' 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.533 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.533 13:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.533 [2024-11-18 13:27:03.429412] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:33.533 [2024-11-18 13:27:03.429511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69100 ] 00:10:33.792 [2024-11-18 13:27:03.603754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.792 [2024-11-18 13:27:03.719855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.050 [2024-11-18 13:27:03.915135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.051 [2024-11-18 13:27:03.915184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:34.310 BaseBdev1_malloc 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.310 true 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.310 [2024-11-18 13:27:04.308928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.310 [2024-11-18 13:27:04.308987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.310 [2024-11-18 13:27:04.309006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:34.310 [2024-11-18 13:27:04.309017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.310 [2024-11-18 13:27:04.311092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.310 [2024-11-18 13:27:04.311159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.310 BaseBdev1 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.310 BaseBdev2_malloc 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.310 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.569 true 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.569 [2024-11-18 13:27:04.376313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.569 [2024-11-18 13:27:04.376368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.569 [2024-11-18 13:27:04.376383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:34.569 [2024-11-18 13:27:04.376393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.569 [2024-11-18 13:27:04.378391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.569 [2024-11-18 13:27:04.378425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.569 BaseBdev2 00:10:34.569 
13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.569 BaseBdev3_malloc 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.569 true 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.569 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.569 [2024-11-18 13:27:04.452992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:34.569 [2024-11-18 13:27:04.453045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.569 [2024-11-18 13:27:04.453061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:34.570 [2024-11-18 13:27:04.453071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.570 [2024-11-18 
13:27:04.455167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.570 [2024-11-18 13:27:04.455206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:34.570 BaseBdev3 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.570 [2024-11-18 13:27:04.465039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.570 [2024-11-18 13:27:04.466800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.570 [2024-11-18 13:27:04.466872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.570 [2024-11-18 13:27:04.467060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:34.570 [2024-11-18 13:27:04.467087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:34.570 [2024-11-18 13:27:04.467329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:34.570 [2024-11-18 13:27:04.467502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:34.570 [2024-11-18 13:27:04.467521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:34.570 [2024-11-18 13:27:04.467668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.570 
13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.570 "name": "raid_bdev1", 00:10:34.570 "uuid": "bac0657d-7d64-4247-9461-7c015598dd69", 00:10:34.570 "strip_size_kb": 0, 00:10:34.570 "state": "online", 00:10:34.570 "raid_level": "raid1", 00:10:34.570 "superblock": true, 00:10:34.570 "num_base_bdevs": 
3, 00:10:34.570 "num_base_bdevs_discovered": 3, 00:10:34.570 "num_base_bdevs_operational": 3, 00:10:34.570 "base_bdevs_list": [ 00:10:34.570 { 00:10:34.570 "name": "BaseBdev1", 00:10:34.570 "uuid": "773cc485-cde2-5d37-8875-386cbe3c9344", 00:10:34.570 "is_configured": true, 00:10:34.570 "data_offset": 2048, 00:10:34.570 "data_size": 63488 00:10:34.570 }, 00:10:34.570 { 00:10:34.570 "name": "BaseBdev2", 00:10:34.570 "uuid": "8c1e899e-297e-56ae-9e29-f9cd9788934d", 00:10:34.570 "is_configured": true, 00:10:34.570 "data_offset": 2048, 00:10:34.570 "data_size": 63488 00:10:34.570 }, 00:10:34.570 { 00:10:34.570 "name": "BaseBdev3", 00:10:34.570 "uuid": "4f6f9c14-3ba1-53c8-91e5-fbc15bbb2ab7", 00:10:34.570 "is_configured": true, 00:10:34.570 "data_offset": 2048, 00:10:34.570 "data_size": 63488 00:10:34.570 } 00:10:34.570 ] 00:10:34.570 }' 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.570 13:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.138 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:35.138 13:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:35.138 [2024-11-18 13:27:04.985290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:36.076 
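The `verify_raid_bdev_state raid_bdev1 online raid1 0 3` call traced above fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq`, and compares the fields against the expected values. The checks can be sketched in Python against the exact JSON blob dumped in the log; the `verify_raid_bdev_state` helper below is an illustrative stand-in, not the shell function from bdev_raid.sh, and only mirrors the fields visible in the trace.

```python
import json

# JSON fields copied from the `bdev_raid_get_bdevs` output logged above
# (uuid/data_offset fields omitted for brevity).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Illustrative mirror of the shell helper's comparisons."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # every base bdev reported must still be configured
    assert all(b["is_configured"] for b in info["base_bdevs_list"])

# Arguments match the traced call: verify_raid_bdev_state raid_bdev1 online raid1 0 3
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 3)
```

The same check is repeated after error injection: because the array is raid1, a failing read on one base bdev must leave the state `online` with all 3 base bdevs operational.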
13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.076 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.077 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.077 13:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.077 13:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.077 13:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.077 13:27:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.077 "name": "raid_bdev1", 00:10:36.077 "uuid": "bac0657d-7d64-4247-9461-7c015598dd69", 00:10:36.077 "strip_size_kb": 0, 00:10:36.077 "state": "online", 00:10:36.077 "raid_level": "raid1", 00:10:36.077 "superblock": true, 00:10:36.077 "num_base_bdevs": 3, 00:10:36.077 "num_base_bdevs_discovered": 3, 00:10:36.077 "num_base_bdevs_operational": 3, 00:10:36.077 "base_bdevs_list": [ 00:10:36.077 { 00:10:36.077 "name": "BaseBdev1", 00:10:36.077 "uuid": "773cc485-cde2-5d37-8875-386cbe3c9344", 00:10:36.077 "is_configured": true, 00:10:36.077 "data_offset": 2048, 00:10:36.077 "data_size": 63488 00:10:36.077 }, 00:10:36.077 { 00:10:36.077 "name": "BaseBdev2", 00:10:36.077 "uuid": "8c1e899e-297e-56ae-9e29-f9cd9788934d", 00:10:36.077 "is_configured": true, 00:10:36.077 "data_offset": 2048, 00:10:36.077 "data_size": 63488 00:10:36.077 }, 00:10:36.077 { 00:10:36.077 "name": "BaseBdev3", 00:10:36.077 "uuid": "4f6f9c14-3ba1-53c8-91e5-fbc15bbb2ab7", 00:10:36.077 "is_configured": true, 00:10:36.077 "data_offset": 2048, 00:10:36.077 "data_size": 63488 00:10:36.077 } 00:10:36.077 ] 00:10:36.077 }' 00:10:36.077 13:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.077 13:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.645 [2024-11-18 13:27:06.396688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.645 [2024-11-18 13:27:06.396732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.645 [2024-11-18 13:27:06.399321] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.645 [2024-11-18 13:27:06.399370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.645 [2024-11-18 13:27:06.399472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.645 [2024-11-18 13:27:06.399483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:36.645 { 00:10:36.645 "results": [ 00:10:36.645 { 00:10:36.645 "job": "raid_bdev1", 00:10:36.645 "core_mask": "0x1", 00:10:36.645 "workload": "randrw", 00:10:36.645 "percentage": 50, 00:10:36.645 "status": "finished", 00:10:36.645 "queue_depth": 1, 00:10:36.645 "io_size": 131072, 00:10:36.645 "runtime": 1.412349, 00:10:36.645 "iops": 13665.885698223314, 00:10:36.645 "mibps": 1708.2357122779142, 00:10:36.645 "io_failed": 0, 00:10:36.645 "io_timeout": 0, 00:10:36.645 "avg_latency_us": 70.6327972236658, 00:10:36.645 "min_latency_us": 22.134497816593885, 00:10:36.645 "max_latency_us": 1523.926637554585 00:10:36.645 } 00:10:36.645 ], 00:10:36.645 "core_count": 1 00:10:36.645 } 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69100 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69100 ']' 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69100 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69100 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.645 killing process with pid 69100 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69100' 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69100 00:10:36.645 [2024-11-18 13:27:06.445854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.645 13:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69100 00:10:36.645 [2024-11-18 13:27:06.670757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oPaL9gTiQ5 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:38.025 00:10:38.025 real 0m4.510s 00:10:38.025 user 0m5.333s 00:10:38.025 sys 0m0.601s 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.025 13:27:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.025 ************************************ 00:10:38.025 END TEST raid_read_error_test 00:10:38.025 
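The pass/fail step traced above (`grep -v Job`, `grep raid_bdev1`, `awk '{print $6}'`, then `[[ 0.00 = \0\.\0\0 ]]`) extracts the failures-per-second column from the bdevperf log and requires it to be exactly 0.00. The same figure can be derived from the structured results JSON that bdevperf printed earlier in this log; the numbers below are copied from that blob, and the computation is a sketch of the check, not SPDK code.

```python
import json

# Job statistics copied from the bdevperf results JSON logged above
# (latency fields omitted for brevity).
results = json.loads("""
{
  "results": [
    {
      "job": "raid_bdev1",
      "status": "finished",
      "runtime": 1.412349,
      "iops": 13665.885698223314,
      "io_failed": 0,
      "io_timeout": 0
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
fail_per_s = job["io_failed"] / job["runtime"]  # 0 failed I/Os over ~1.41 s

# raid1 provides redundancy, so reads injected to fail on EE_BaseBdev1_malloc
# must be served from a mirror and never surface as failed I/O.
assert f"{fail_per_s:.2f}" == "0.00"
```

For raid levels without redundancy the script instead expects a nonzero failure rate, which is why the trace branches on `has_redundancy raid1` before comparing against 0.00.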
************************************ 00:10:38.025 13:27:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:38.025 13:27:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.025 13:27:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.025 13:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.025 ************************************ 00:10:38.025 START TEST raid_write_error_test 00:10:38.025 ************************************ 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 
-- # (( i++ )) 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2LHFsty8Qm 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69246 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69246 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69246 ']' 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.025 13:27:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.025 13:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.025 [2024-11-18 13:27:08.009711] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:38.025 [2024-11-18 13:27:08.009821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69246 ] 00:10:38.285 [2024-11-18 13:27:08.184892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.285 [2024-11-18 13:27:08.296683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.545 [2024-11-18 13:27:08.492478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.545 [2024-11-18 13:27:08.492521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.805 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.805 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:38.805 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:38.805 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.805 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:38.805 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 BaseBdev1_malloc 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 true 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 [2024-11-18 13:27:08.900022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:39.065 [2024-11-18 13:27:08.900091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.065 [2024-11-18 13:27:08.900116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:39.065 [2024-11-18 13:27:08.900137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.065 [2024-11-18 13:27:08.902245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.065 [2024-11-18 13:27:08.902283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:39.065 BaseBdev1 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 BaseBdev2_malloc 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 true 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 [2024-11-18 13:27:08.965320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:39.065 [2024-11-18 13:27:08.965376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.065 [2024-11-18 13:27:08.965393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:39.065 [2024-11-18 13:27:08.965403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.065 [2024-11-18 13:27:08.967401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.065 [2024-11-18 13:27:08.967437] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:39.065 BaseBdev2 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.065 13:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 BaseBdev3_malloc 00:10:39.065 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.065 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:39.065 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.065 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 true 00:10:39.065 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.065 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:39.065 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.065 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.065 [2024-11-18 13:27:09.043250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:39.065 [2024-11-18 13:27:09.043302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.065 [2024-11-18 13:27:09.043318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 
00:10:39.066 [2024-11-18 13:27:09.043328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.066 [2024-11-18 13:27:09.045316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.066 [2024-11-18 13:27:09.045350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:39.066 BaseBdev3 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.066 [2024-11-18 13:27:09.055292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.066 [2024-11-18 13:27:09.056998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.066 [2024-11-18 13:27:09.057068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.066 [2024-11-18 13:27:09.057281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:39.066 [2024-11-18 13:27:09.057295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:39.066 [2024-11-18 13:27:09.057521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:39.066 [2024-11-18 13:27:09.057694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:39.066 [2024-11-18 13:27:09.057713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:39.066 [2024-11-18 13:27:09.057863] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.066 "name": "raid_bdev1", 00:10:39.066 "uuid": "297cde54-00f9-46c5-9ce4-21a52dc4094b", 
00:10:39.066 "strip_size_kb": 0, 00:10:39.066 "state": "online", 00:10:39.066 "raid_level": "raid1", 00:10:39.066 "superblock": true, 00:10:39.066 "num_base_bdevs": 3, 00:10:39.066 "num_base_bdevs_discovered": 3, 00:10:39.066 "num_base_bdevs_operational": 3, 00:10:39.066 "base_bdevs_list": [ 00:10:39.066 { 00:10:39.066 "name": "BaseBdev1", 00:10:39.066 "uuid": "e4a10772-fb0f-54ff-b7d4-29a4ad86077d", 00:10:39.066 "is_configured": true, 00:10:39.066 "data_offset": 2048, 00:10:39.066 "data_size": 63488 00:10:39.066 }, 00:10:39.066 { 00:10:39.066 "name": "BaseBdev2", 00:10:39.066 "uuid": "248a413d-3196-5b49-8ff3-6dc03d758f0d", 00:10:39.066 "is_configured": true, 00:10:39.066 "data_offset": 2048, 00:10:39.066 "data_size": 63488 00:10:39.066 }, 00:10:39.066 { 00:10:39.066 "name": "BaseBdev3", 00:10:39.066 "uuid": "cf685fcc-c017-59e0-9f9e-5c24ce33b5a8", 00:10:39.066 "is_configured": true, 00:10:39.066 "data_offset": 2048, 00:10:39.066 "data_size": 63488 00:10:39.066 } 00:10:39.066 ] 00:10:39.066 }' 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.066 13:27:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.635 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:39.635 13:27:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:39.635 [2024-11-18 13:27:09.631833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.575 [2024-11-18 13:27:10.534556] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:40.575 [2024-11-18 13:27:10.534611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.575 [2024-11-18 13:27:10.534824] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.575 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.575 "name": "raid_bdev1", 00:10:40.575 "uuid": "297cde54-00f9-46c5-9ce4-21a52dc4094b", 00:10:40.575 "strip_size_kb": 0, 00:10:40.575 "state": "online", 00:10:40.575 "raid_level": "raid1", 00:10:40.575 "superblock": true, 00:10:40.575 "num_base_bdevs": 3, 00:10:40.575 "num_base_bdevs_discovered": 2, 00:10:40.575 "num_base_bdevs_operational": 2, 00:10:40.575 "base_bdevs_list": [ 00:10:40.575 { 00:10:40.575 "name": null, 00:10:40.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.575 "is_configured": false, 00:10:40.575 "data_offset": 0, 00:10:40.575 "data_size": 63488 00:10:40.575 }, 00:10:40.575 { 00:10:40.575 "name": "BaseBdev2", 00:10:40.575 "uuid": "248a413d-3196-5b49-8ff3-6dc03d758f0d", 00:10:40.575 "is_configured": true, 00:10:40.575 "data_offset": 2048, 00:10:40.575 "data_size": 63488 00:10:40.576 }, 00:10:40.576 { 00:10:40.576 "name": "BaseBdev3", 00:10:40.576 "uuid": "cf685fcc-c017-59e0-9f9e-5c24ce33b5a8", 00:10:40.576 "is_configured": true, 00:10:40.576 "data_offset": 2048, 00:10:40.576 "data_size": 63488 00:10:40.576 } 00:10:40.576 ] 00:10:40.576 }' 00:10:40.576 13:27:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.576 13:27:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.146 
13:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.146 [2024-11-18 13:27:11.032912] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.146 [2024-11-18 13:27:11.032956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.146 [2024-11-18 13:27:11.035603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.146 [2024-11-18 13:27:11.035667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.146 [2024-11-18 13:27:11.035757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.146 [2024-11-18 13:27:11.035773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.146 { 00:10:41.146 "results": [ 00:10:41.146 { 00:10:41.146 "job": "raid_bdev1", 00:10:41.146 "core_mask": "0x1", 00:10:41.146 "workload": "randrw", 00:10:41.146 "percentage": 50, 00:10:41.146 "status": "finished", 00:10:41.146 "queue_depth": 1, 00:10:41.146 "io_size": 131072, 00:10:41.146 "runtime": 1.402103, 00:10:41.146 "iops": 15168.643102539543, 00:10:41.146 "mibps": 1896.080387817443, 00:10:41.146 "io_failed": 0, 00:10:41.146 "io_timeout": 0, 00:10:41.146 "avg_latency_us": 63.375384056905716, 00:10:41.146 "min_latency_us": 23.252401746724892, 00:10:41.146 "max_latency_us": 1380.8349344978167 00:10:41.146 } 00:10:41.146 ], 00:10:41.146 "core_count": 1 00:10:41.146 } 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69246 
00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69246 ']' 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69246 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69246 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.146 killing process with pid 69246 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69246' 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69246 00:10:41.146 [2024-11-18 13:27:11.084511] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.146 13:27:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69246 00:10:41.408 [2024-11-18 13:27:11.307388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2LHFsty8Qm 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:42.795 00:10:42.795 real 0m4.561s 00:10:42.795 user 0m5.470s 00:10:42.795 sys 0m0.578s 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.795 13:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.795 ************************************ 00:10:42.795 END TEST raid_write_error_test 00:10:42.795 ************************************ 00:10:42.795 13:27:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:42.795 13:27:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:42.795 13:27:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:42.795 13:27:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:42.795 13:27:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.795 13:27:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.795 ************************************ 00:10:42.795 START TEST raid_state_function_test 00:10:42.795 ************************************ 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i = 1 )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:42.795 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:42.795 13:27:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69384 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69384' 00:10:42.796 Process raid pid: 69384 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69384 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69384 ']' 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.796 13:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.796 [2024-11-18 13:27:12.647855] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:42.796 [2024-11-18 13:27:12.647990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.796 [2024-11-18 13:27:12.826292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.056 [2024-11-18 13:27:12.942369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.315 [2024-11-18 13:27:13.149013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.315 [2024-11-18 13:27:13.149054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.575 [2024-11-18 13:27:13.489527] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.575 [2024-11-18 13:27:13.489584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.575 [2024-11-18 13:27:13.489594] 
bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.575 [2024-11-18 13:27:13.489604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.575 [2024-11-18 13:27:13.489610] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.575 [2024-11-18 13:27:13.489619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.575 [2024-11-18 13:27:13.489625] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.575 [2024-11-18 13:27:13.489633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.575 "name": "Existed_Raid", 00:10:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.575 "strip_size_kb": 64, 00:10:43.575 "state": "configuring", 00:10:43.575 "raid_level": "raid0", 00:10:43.575 "superblock": false, 00:10:43.575 "num_base_bdevs": 4, 00:10:43.575 "num_base_bdevs_discovered": 0, 00:10:43.575 "num_base_bdevs_operational": 4, 00:10:43.575 "base_bdevs_list": [ 00:10:43.575 { 00:10:43.575 "name": "BaseBdev1", 00:10:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.575 "is_configured": false, 00:10:43.575 "data_offset": 0, 00:10:43.575 "data_size": 0 00:10:43.575 }, 00:10:43.575 { 00:10:43.575 "name": "BaseBdev2", 00:10:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.575 "is_configured": false, 00:10:43.575 "data_offset": 0, 00:10:43.575 "data_size": 0 00:10:43.575 }, 00:10:43.575 { 00:10:43.575 "name": "BaseBdev3", 00:10:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.575 "is_configured": false, 00:10:43.575 "data_offset": 0, 00:10:43.575 "data_size": 0 00:10:43.575 }, 00:10:43.575 { 00:10:43.575 "name": "BaseBdev4", 00:10:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.575 "is_configured": false, 00:10:43.575 "data_offset": 0, 00:10:43.575 "data_size": 0 00:10:43.575 } 00:10:43.575 ] 00:10:43.575 
}' 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.575 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.144 [2024-11-18 13:27:13.956679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.144 [2024-11-18 13:27:13.956727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.144 [2024-11-18 13:27:13.964641] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.144 [2024-11-18 13:27:13.964683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.144 [2024-11-18 13:27:13.964692] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.144 [2024-11-18 13:27:13.964702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.144 [2024-11-18 13:27:13.964708] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.144 
[2024-11-18 13:27:13.964717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.144 [2024-11-18 13:27:13.964723] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.144 [2024-11-18 13:27:13.964731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.144 13:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 [2024-11-18 13:27:14.008581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.145 BaseBdev1 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 [ 00:10:44.145 { 00:10:44.145 "name": "BaseBdev1", 00:10:44.145 "aliases": [ 00:10:44.145 "7c8bc9f4-c492-4ab9-a531-fdbfca08ca7c" 00:10:44.145 ], 00:10:44.145 "product_name": "Malloc disk", 00:10:44.145 "block_size": 512, 00:10:44.145 "num_blocks": 65536, 00:10:44.145 "uuid": "7c8bc9f4-c492-4ab9-a531-fdbfca08ca7c", 00:10:44.145 "assigned_rate_limits": { 00:10:44.145 "rw_ios_per_sec": 0, 00:10:44.145 "rw_mbytes_per_sec": 0, 00:10:44.145 "r_mbytes_per_sec": 0, 00:10:44.145 "w_mbytes_per_sec": 0 00:10:44.145 }, 00:10:44.145 "claimed": true, 00:10:44.145 "claim_type": "exclusive_write", 00:10:44.145 "zoned": false, 00:10:44.145 "supported_io_types": { 00:10:44.145 "read": true, 00:10:44.145 "write": true, 00:10:44.145 "unmap": true, 00:10:44.145 "flush": true, 00:10:44.145 "reset": true, 00:10:44.145 "nvme_admin": false, 00:10:44.145 "nvme_io": false, 00:10:44.145 "nvme_io_md": false, 00:10:44.145 "write_zeroes": true, 00:10:44.145 "zcopy": true, 00:10:44.145 "get_zone_info": false, 00:10:44.145 "zone_management": false, 00:10:44.145 "zone_append": false, 00:10:44.145 "compare": false, 00:10:44.145 "compare_and_write": false, 00:10:44.145 "abort": true, 00:10:44.145 "seek_hole": false, 00:10:44.145 "seek_data": false, 00:10:44.145 "copy": true, 00:10:44.145 "nvme_iov_md": false 00:10:44.145 }, 00:10:44.145 "memory_domains": [ 00:10:44.145 { 00:10:44.145 "dma_device_id": "system", 00:10:44.145 
"dma_device_type": 1 00:10:44.145 }, 00:10:44.145 { 00:10:44.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.145 "dma_device_type": 2 00:10:44.145 } 00:10:44.145 ], 00:10:44.145 "driver_specific": {} 00:10:44.145 } 00:10:44.145 ] 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.145 13:27:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.145 "name": "Existed_Raid", 00:10:44.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.145 "strip_size_kb": 64, 00:10:44.145 "state": "configuring", 00:10:44.145 "raid_level": "raid0", 00:10:44.145 "superblock": false, 00:10:44.145 "num_base_bdevs": 4, 00:10:44.145 "num_base_bdevs_discovered": 1, 00:10:44.145 "num_base_bdevs_operational": 4, 00:10:44.145 "base_bdevs_list": [ 00:10:44.145 { 00:10:44.145 "name": "BaseBdev1", 00:10:44.145 "uuid": "7c8bc9f4-c492-4ab9-a531-fdbfca08ca7c", 00:10:44.145 "is_configured": true, 00:10:44.145 "data_offset": 0, 00:10:44.145 "data_size": 65536 00:10:44.145 }, 00:10:44.145 { 00:10:44.145 "name": "BaseBdev2", 00:10:44.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.145 "is_configured": false, 00:10:44.145 "data_offset": 0, 00:10:44.145 "data_size": 0 00:10:44.145 }, 00:10:44.145 { 00:10:44.145 "name": "BaseBdev3", 00:10:44.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.145 "is_configured": false, 00:10:44.145 "data_offset": 0, 00:10:44.145 "data_size": 0 00:10:44.145 }, 00:10:44.145 { 00:10:44.145 "name": "BaseBdev4", 00:10:44.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.145 "is_configured": false, 00:10:44.145 "data_offset": 0, 00:10:44.145 "data_size": 0 00:10:44.145 } 00:10:44.145 ] 00:10:44.145 }' 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.145 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.714 [2024-11-18 13:27:14.491819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.714 [2024-11-18 13:27:14.491888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.714 [2024-11-18 13:27:14.499832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.714 [2024-11-18 13:27:14.501596] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.714 [2024-11-18 13:27:14.501634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.714 [2024-11-18 13:27:14.501645] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.714 [2024-11-18 13:27:14.501656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.714 [2024-11-18 13:27:14.501663] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.714 [2024-11-18 13:27:14.501672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.714 13:27:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:44.714 "name": "Existed_Raid", 00:10:44.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.714 "strip_size_kb": 64, 00:10:44.714 "state": "configuring", 00:10:44.714 "raid_level": "raid0", 00:10:44.714 "superblock": false, 00:10:44.714 "num_base_bdevs": 4, 00:10:44.714 "num_base_bdevs_discovered": 1, 00:10:44.714 "num_base_bdevs_operational": 4, 00:10:44.714 "base_bdevs_list": [ 00:10:44.714 { 00:10:44.714 "name": "BaseBdev1", 00:10:44.714 "uuid": "7c8bc9f4-c492-4ab9-a531-fdbfca08ca7c", 00:10:44.714 "is_configured": true, 00:10:44.714 "data_offset": 0, 00:10:44.714 "data_size": 65536 00:10:44.714 }, 00:10:44.714 { 00:10:44.714 "name": "BaseBdev2", 00:10:44.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.714 "is_configured": false, 00:10:44.714 "data_offset": 0, 00:10:44.714 "data_size": 0 00:10:44.714 }, 00:10:44.714 { 00:10:44.714 "name": "BaseBdev3", 00:10:44.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.714 "is_configured": false, 00:10:44.714 "data_offset": 0, 00:10:44.714 "data_size": 0 00:10:44.714 }, 00:10:44.714 { 00:10:44.714 "name": "BaseBdev4", 00:10:44.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.714 "is_configured": false, 00:10:44.714 "data_offset": 0, 00:10:44.714 "data_size": 0 00:10:44.714 } 00:10:44.714 ] 00:10:44.714 }' 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.714 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 [2024-11-18 13:27:14.896479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:10:44.974 BaseBdev2 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 [ 00:10:44.974 { 00:10:44.974 "name": "BaseBdev2", 00:10:44.974 "aliases": [ 00:10:44.974 "76bbdc8f-8324-49ff-a39a-105406f18b17" 00:10:44.974 ], 00:10:44.974 "product_name": "Malloc disk", 00:10:44.974 "block_size": 512, 00:10:44.974 "num_blocks": 65536, 00:10:44.974 "uuid": "76bbdc8f-8324-49ff-a39a-105406f18b17", 00:10:44.974 "assigned_rate_limits": { 00:10:44.974 
"rw_ios_per_sec": 0, 00:10:44.974 "rw_mbytes_per_sec": 0, 00:10:44.974 "r_mbytes_per_sec": 0, 00:10:44.974 "w_mbytes_per_sec": 0 00:10:44.974 }, 00:10:44.974 "claimed": true, 00:10:44.974 "claim_type": "exclusive_write", 00:10:44.974 "zoned": false, 00:10:44.974 "supported_io_types": { 00:10:44.974 "read": true, 00:10:44.974 "write": true, 00:10:44.974 "unmap": true, 00:10:44.974 "flush": true, 00:10:44.974 "reset": true, 00:10:44.974 "nvme_admin": false, 00:10:44.974 "nvme_io": false, 00:10:44.974 "nvme_io_md": false, 00:10:44.974 "write_zeroes": true, 00:10:44.974 "zcopy": true, 00:10:44.974 "get_zone_info": false, 00:10:44.974 "zone_management": false, 00:10:44.974 "zone_append": false, 00:10:44.974 "compare": false, 00:10:44.974 "compare_and_write": false, 00:10:44.974 "abort": true, 00:10:44.974 "seek_hole": false, 00:10:44.974 "seek_data": false, 00:10:44.974 "copy": true, 00:10:44.974 "nvme_iov_md": false 00:10:44.974 }, 00:10:44.974 "memory_domains": [ 00:10:44.974 { 00:10:44.974 "dma_device_id": "system", 00:10:44.974 "dma_device_type": 1 00:10:44.974 }, 00:10:44.974 { 00:10:44.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.974 "dma_device_type": 2 00:10:44.974 } 00:10:44.974 ], 00:10:44.974 "driver_specific": {} 00:10:44.974 } 00:10:44.974 ] 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.974 13:27:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.974 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.974 "name": "Existed_Raid", 00:10:44.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.974 "strip_size_kb": 64, 00:10:44.974 "state": "configuring", 00:10:44.974 "raid_level": "raid0", 00:10:44.974 "superblock": false, 00:10:44.974 "num_base_bdevs": 4, 00:10:44.974 "num_base_bdevs_discovered": 2, 00:10:44.974 "num_base_bdevs_operational": 4, 00:10:44.974 "base_bdevs_list": [ 00:10:44.974 { 00:10:44.974 "name": "BaseBdev1", 
00:10:44.974 "uuid": "7c8bc9f4-c492-4ab9-a531-fdbfca08ca7c", 00:10:44.974 "is_configured": true, 00:10:44.974 "data_offset": 0, 00:10:44.974 "data_size": 65536 00:10:44.974 }, 00:10:44.974 { 00:10:44.974 "name": "BaseBdev2", 00:10:44.974 "uuid": "76bbdc8f-8324-49ff-a39a-105406f18b17", 00:10:44.974 "is_configured": true, 00:10:44.974 "data_offset": 0, 00:10:44.974 "data_size": 65536 00:10:44.974 }, 00:10:44.975 { 00:10:44.975 "name": "BaseBdev3", 00:10:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.975 "is_configured": false, 00:10:44.975 "data_offset": 0, 00:10:44.975 "data_size": 0 00:10:44.975 }, 00:10:44.975 { 00:10:44.975 "name": "BaseBdev4", 00:10:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.975 "is_configured": false, 00:10:44.975 "data_offset": 0, 00:10:44.975 "data_size": 0 00:10:44.975 } 00:10:44.975 ] 00:10:44.975 }' 00:10:44.975 13:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.975 13:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.544 [2024-11-18 13:27:15.434114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.544 BaseBdev3 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 
-- # local bdev_timeout= 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.544 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.544 [ 00:10:45.544 { 00:10:45.544 "name": "BaseBdev3", 00:10:45.544 "aliases": [ 00:10:45.544 "0448dfbb-798e-46e2-b661-82a3b6188969" 00:10:45.544 ], 00:10:45.544 "product_name": "Malloc disk", 00:10:45.544 "block_size": 512, 00:10:45.544 "num_blocks": 65536, 00:10:45.544 "uuid": "0448dfbb-798e-46e2-b661-82a3b6188969", 00:10:45.544 "assigned_rate_limits": { 00:10:45.544 "rw_ios_per_sec": 0, 00:10:45.544 "rw_mbytes_per_sec": 0, 00:10:45.544 "r_mbytes_per_sec": 0, 00:10:45.544 "w_mbytes_per_sec": 0 00:10:45.544 }, 00:10:45.544 "claimed": true, 00:10:45.544 "claim_type": "exclusive_write", 00:10:45.544 "zoned": false, 00:10:45.544 "supported_io_types": { 00:10:45.544 "read": true, 00:10:45.544 "write": true, 00:10:45.544 "unmap": true, 00:10:45.544 "flush": true, 00:10:45.544 "reset": true, 00:10:45.544 "nvme_admin": false, 00:10:45.544 
"nvme_io": false, 00:10:45.545 "nvme_io_md": false, 00:10:45.545 "write_zeroes": true, 00:10:45.545 "zcopy": true, 00:10:45.545 "get_zone_info": false, 00:10:45.545 "zone_management": false, 00:10:45.545 "zone_append": false, 00:10:45.545 "compare": false, 00:10:45.545 "compare_and_write": false, 00:10:45.545 "abort": true, 00:10:45.545 "seek_hole": false, 00:10:45.545 "seek_data": false, 00:10:45.545 "copy": true, 00:10:45.545 "nvme_iov_md": false 00:10:45.545 }, 00:10:45.545 "memory_domains": [ 00:10:45.545 { 00:10:45.545 "dma_device_id": "system", 00:10:45.545 "dma_device_type": 1 00:10:45.545 }, 00:10:45.545 { 00:10:45.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.545 "dma_device_type": 2 00:10:45.545 } 00:10:45.545 ], 00:10:45.545 "driver_specific": {} 00:10:45.545 } 00:10:45.545 ] 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.545 13:27:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.545 "name": "Existed_Raid", 00:10:45.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.545 "strip_size_kb": 64, 00:10:45.545 "state": "configuring", 00:10:45.545 "raid_level": "raid0", 00:10:45.545 "superblock": false, 00:10:45.545 "num_base_bdevs": 4, 00:10:45.545 "num_base_bdevs_discovered": 3, 00:10:45.545 "num_base_bdevs_operational": 4, 00:10:45.545 "base_bdevs_list": [ 00:10:45.545 { 00:10:45.545 "name": "BaseBdev1", 00:10:45.545 "uuid": "7c8bc9f4-c492-4ab9-a531-fdbfca08ca7c", 00:10:45.545 "is_configured": true, 00:10:45.545 "data_offset": 0, 00:10:45.545 "data_size": 65536 00:10:45.545 }, 00:10:45.545 { 00:10:45.545 "name": "BaseBdev2", 00:10:45.545 "uuid": "76bbdc8f-8324-49ff-a39a-105406f18b17", 00:10:45.545 "is_configured": true, 00:10:45.545 "data_offset": 0, 00:10:45.545 "data_size": 65536 00:10:45.545 }, 00:10:45.545 { 00:10:45.545 "name": "BaseBdev3", 00:10:45.545 
"uuid": "0448dfbb-798e-46e2-b661-82a3b6188969", 00:10:45.545 "is_configured": true, 00:10:45.545 "data_offset": 0, 00:10:45.545 "data_size": 65536 00:10:45.545 }, 00:10:45.545 { 00:10:45.545 "name": "BaseBdev4", 00:10:45.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.545 "is_configured": false, 00:10:45.545 "data_offset": 0, 00:10:45.545 "data_size": 0 00:10:45.545 } 00:10:45.545 ] 00:10:45.545 }' 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.545 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.115 [2024-11-18 13:27:15.904193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:46.115 [2024-11-18 13:27:15.904245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:46.115 [2024-11-18 13:27:15.904255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:46.115 [2024-11-18 13:27:15.904525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:46.115 [2024-11-18 13:27:15.904692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:46.115 [2024-11-18 13:27:15.904711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:46.115 [2024-11-18 13:27:15.904958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.115 BaseBdev4 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.115 [ 00:10:46.115 { 00:10:46.115 "name": "BaseBdev4", 00:10:46.115 "aliases": [ 00:10:46.115 "9b730782-9422-4533-b509-3e53dbe9070c" 00:10:46.115 ], 00:10:46.115 "product_name": "Malloc disk", 00:10:46.115 "block_size": 512, 00:10:46.115 "num_blocks": 65536, 00:10:46.115 "uuid": "9b730782-9422-4533-b509-3e53dbe9070c", 00:10:46.115 "assigned_rate_limits": { 00:10:46.115 "rw_ios_per_sec": 0, 00:10:46.115 "rw_mbytes_per_sec": 0, 00:10:46.115 "r_mbytes_per_sec": 0, 00:10:46.115 "w_mbytes_per_sec": 0 00:10:46.115 }, 
00:10:46.115 "claimed": true, 00:10:46.115 "claim_type": "exclusive_write", 00:10:46.115 "zoned": false, 00:10:46.115 "supported_io_types": { 00:10:46.115 "read": true, 00:10:46.115 "write": true, 00:10:46.115 "unmap": true, 00:10:46.115 "flush": true, 00:10:46.115 "reset": true, 00:10:46.115 "nvme_admin": false, 00:10:46.115 "nvme_io": false, 00:10:46.115 "nvme_io_md": false, 00:10:46.115 "write_zeroes": true, 00:10:46.115 "zcopy": true, 00:10:46.115 "get_zone_info": false, 00:10:46.115 "zone_management": false, 00:10:46.115 "zone_append": false, 00:10:46.115 "compare": false, 00:10:46.115 "compare_and_write": false, 00:10:46.115 "abort": true, 00:10:46.115 "seek_hole": false, 00:10:46.115 "seek_data": false, 00:10:46.115 "copy": true, 00:10:46.115 "nvme_iov_md": false 00:10:46.115 }, 00:10:46.115 "memory_domains": [ 00:10:46.115 { 00:10:46.115 "dma_device_id": "system", 00:10:46.115 "dma_device_type": 1 00:10:46.115 }, 00:10:46.115 { 00:10:46.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.115 "dma_device_type": 2 00:10:46.115 } 00:10:46.115 ], 00:10:46.115 "driver_specific": {} 00:10:46.115 } 00:10:46.115 ] 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.115 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.116 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.116 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.116 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.116 "name": "Existed_Raid", 00:10:46.116 "uuid": "bb0a13d1-3699-49ec-ba4e-41403f2e356c", 00:10:46.116 "strip_size_kb": 64, 00:10:46.116 "state": "online", 00:10:46.116 "raid_level": "raid0", 00:10:46.116 "superblock": false, 00:10:46.116 "num_base_bdevs": 4, 00:10:46.116 "num_base_bdevs_discovered": 4, 00:10:46.116 "num_base_bdevs_operational": 4, 00:10:46.116 "base_bdevs_list": [ 00:10:46.116 { 00:10:46.116 "name": "BaseBdev1", 00:10:46.116 "uuid": "7c8bc9f4-c492-4ab9-a531-fdbfca08ca7c", 00:10:46.116 "is_configured": true, 00:10:46.116 "data_offset": 0, 00:10:46.116 "data_size": 65536 
00:10:46.116 }, 00:10:46.116 { 00:10:46.116 "name": "BaseBdev2", 00:10:46.116 "uuid": "76bbdc8f-8324-49ff-a39a-105406f18b17", 00:10:46.116 "is_configured": true, 00:10:46.116 "data_offset": 0, 00:10:46.116 "data_size": 65536 00:10:46.116 }, 00:10:46.116 { 00:10:46.116 "name": "BaseBdev3", 00:10:46.116 "uuid": "0448dfbb-798e-46e2-b661-82a3b6188969", 00:10:46.116 "is_configured": true, 00:10:46.116 "data_offset": 0, 00:10:46.116 "data_size": 65536 00:10:46.116 }, 00:10:46.116 { 00:10:46.116 "name": "BaseBdev4", 00:10:46.116 "uuid": "9b730782-9422-4533-b509-3e53dbe9070c", 00:10:46.116 "is_configured": true, 00:10:46.116 "data_offset": 0, 00:10:46.116 "data_size": 65536 00:10:46.116 } 00:10:46.116 ] 00:10:46.116 }' 00:10:46.116 13:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.116 13:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.377 13:27:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.377 [2024-11-18 13:27:16.391745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.377 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.637 "name": "Existed_Raid", 00:10:46.637 "aliases": [ 00:10:46.637 "bb0a13d1-3699-49ec-ba4e-41403f2e356c" 00:10:46.637 ], 00:10:46.637 "product_name": "Raid Volume", 00:10:46.637 "block_size": 512, 00:10:46.637 "num_blocks": 262144, 00:10:46.637 "uuid": "bb0a13d1-3699-49ec-ba4e-41403f2e356c", 00:10:46.637 "assigned_rate_limits": { 00:10:46.637 "rw_ios_per_sec": 0, 00:10:46.637 "rw_mbytes_per_sec": 0, 00:10:46.637 "r_mbytes_per_sec": 0, 00:10:46.637 "w_mbytes_per_sec": 0 00:10:46.637 }, 00:10:46.637 "claimed": false, 00:10:46.637 "zoned": false, 00:10:46.637 "supported_io_types": { 00:10:46.637 "read": true, 00:10:46.637 "write": true, 00:10:46.637 "unmap": true, 00:10:46.637 "flush": true, 00:10:46.637 "reset": true, 00:10:46.637 "nvme_admin": false, 00:10:46.637 "nvme_io": false, 00:10:46.637 "nvme_io_md": false, 00:10:46.637 "write_zeroes": true, 00:10:46.637 "zcopy": false, 00:10:46.637 "get_zone_info": false, 00:10:46.637 "zone_management": false, 00:10:46.637 "zone_append": false, 00:10:46.637 "compare": false, 00:10:46.637 "compare_and_write": false, 00:10:46.637 "abort": false, 00:10:46.637 "seek_hole": false, 00:10:46.637 "seek_data": false, 00:10:46.637 "copy": false, 00:10:46.637 "nvme_iov_md": false 00:10:46.637 }, 00:10:46.637 "memory_domains": [ 00:10:46.637 { 00:10:46.637 "dma_device_id": "system", 00:10:46.637 "dma_device_type": 1 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.637 "dma_device_type": 2 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "dma_device_id": "system", 00:10:46.637 
"dma_device_type": 1 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.637 "dma_device_type": 2 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "dma_device_id": "system", 00:10:46.637 "dma_device_type": 1 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.637 "dma_device_type": 2 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "dma_device_id": "system", 00:10:46.637 "dma_device_type": 1 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.637 "dma_device_type": 2 00:10:46.637 } 00:10:46.637 ], 00:10:46.637 "driver_specific": { 00:10:46.637 "raid": { 00:10:46.637 "uuid": "bb0a13d1-3699-49ec-ba4e-41403f2e356c", 00:10:46.637 "strip_size_kb": 64, 00:10:46.637 "state": "online", 00:10:46.637 "raid_level": "raid0", 00:10:46.637 "superblock": false, 00:10:46.637 "num_base_bdevs": 4, 00:10:46.637 "num_base_bdevs_discovered": 4, 00:10:46.637 "num_base_bdevs_operational": 4, 00:10:46.637 "base_bdevs_list": [ 00:10:46.637 { 00:10:46.637 "name": "BaseBdev1", 00:10:46.637 "uuid": "7c8bc9f4-c492-4ab9-a531-fdbfca08ca7c", 00:10:46.637 "is_configured": true, 00:10:46.637 "data_offset": 0, 00:10:46.637 "data_size": 65536 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "name": "BaseBdev2", 00:10:46.637 "uuid": "76bbdc8f-8324-49ff-a39a-105406f18b17", 00:10:46.637 "is_configured": true, 00:10:46.637 "data_offset": 0, 00:10:46.637 "data_size": 65536 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "name": "BaseBdev3", 00:10:46.637 "uuid": "0448dfbb-798e-46e2-b661-82a3b6188969", 00:10:46.637 "is_configured": true, 00:10:46.637 "data_offset": 0, 00:10:46.637 "data_size": 65536 00:10:46.637 }, 00:10:46.637 { 00:10:46.637 "name": "BaseBdev4", 00:10:46.637 "uuid": "9b730782-9422-4533-b509-3e53dbe9070c", 00:10:46.637 "is_configured": true, 00:10:46.637 "data_offset": 0, 00:10:46.637 "data_size": 65536 00:10:46.637 } 00:10:46.637 ] 00:10:46.637 } 00:10:46.637 } 00:10:46.637 }' 
00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:46.637 BaseBdev2 00:10:46.637 BaseBdev3 00:10:46.637 BaseBdev4' 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.637 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.638 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.897 [2024-11-18 13:27:16.690924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.897 [2024-11-18 13:27:16.690958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.897 [2024-11-18 13:27:16.691009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.897 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.897 "name": "Existed_Raid", 00:10:46.897 "uuid": "bb0a13d1-3699-49ec-ba4e-41403f2e356c", 00:10:46.897 "strip_size_kb": 64, 00:10:46.897 "state": "offline", 00:10:46.897 "raid_level": "raid0", 00:10:46.897 "superblock": false, 00:10:46.897 "num_base_bdevs": 4, 00:10:46.897 "num_base_bdevs_discovered": 3, 00:10:46.897 "num_base_bdevs_operational": 3, 00:10:46.897 "base_bdevs_list": [ 00:10:46.897 { 
00:10:46.897 "name": null, 00:10:46.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.897 "is_configured": false, 00:10:46.897 "data_offset": 0, 00:10:46.897 "data_size": 65536 00:10:46.897 }, 00:10:46.897 { 00:10:46.897 "name": "BaseBdev2", 00:10:46.897 "uuid": "76bbdc8f-8324-49ff-a39a-105406f18b17", 00:10:46.897 "is_configured": true, 00:10:46.897 "data_offset": 0, 00:10:46.897 "data_size": 65536 00:10:46.897 }, 00:10:46.897 { 00:10:46.897 "name": "BaseBdev3", 00:10:46.897 "uuid": "0448dfbb-798e-46e2-b661-82a3b6188969", 00:10:46.897 "is_configured": true, 00:10:46.897 "data_offset": 0, 00:10:46.897 "data_size": 65536 00:10:46.897 }, 00:10:46.897 { 00:10:46.897 "name": "BaseBdev4", 00:10:46.897 "uuid": "9b730782-9422-4533-b509-3e53dbe9070c", 00:10:46.897 "is_configured": true, 00:10:46.897 "data_offset": 0, 00:10:46.897 "data_size": 65536 00:10:46.898 } 00:10:46.898 ] 00:10:46.898 }' 00:10:46.898 13:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.898 13:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.468 
13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.468 [2024-11-18 13:27:17.293524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.468 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:47.468 [2024-11-18 13:27:17.445934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.727 [2024-11-18 13:27:17.595299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:47.727 [2024-11-18 13:27:17.595350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( 
i++ )) 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.727 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.987 BaseBdev2 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.987 [ 00:10:47.987 { 00:10:47.987 "name": "BaseBdev2", 00:10:47.987 "aliases": [ 00:10:47.987 "5e7e7321-74fd-4013-8800-8dde09e36a43" 00:10:47.987 ], 00:10:47.987 "product_name": "Malloc disk", 00:10:47.987 "block_size": 512, 00:10:47.987 "num_blocks": 65536, 00:10:47.987 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:47.987 "assigned_rate_limits": { 00:10:47.987 "rw_ios_per_sec": 0, 00:10:47.987 "rw_mbytes_per_sec": 0, 00:10:47.987 "r_mbytes_per_sec": 0, 00:10:47.987 "w_mbytes_per_sec": 0 00:10:47.987 }, 00:10:47.987 "claimed": false, 00:10:47.987 "zoned": false, 00:10:47.987 "supported_io_types": { 00:10:47.987 "read": true, 00:10:47.987 "write": true, 00:10:47.987 "unmap": true, 00:10:47.987 "flush": true, 00:10:47.987 "reset": true, 00:10:47.987 "nvme_admin": false, 00:10:47.987 "nvme_io": false, 00:10:47.987 "nvme_io_md": false, 
00:10:47.987 "write_zeroes": true, 00:10:47.987 "zcopy": true, 00:10:47.987 "get_zone_info": false, 00:10:47.987 "zone_management": false, 00:10:47.987 "zone_append": false, 00:10:47.987 "compare": false, 00:10:47.987 "compare_and_write": false, 00:10:47.987 "abort": true, 00:10:47.987 "seek_hole": false, 00:10:47.987 "seek_data": false, 00:10:47.987 "copy": true, 00:10:47.987 "nvme_iov_md": false 00:10:47.987 }, 00:10:47.987 "memory_domains": [ 00:10:47.987 { 00:10:47.987 "dma_device_id": "system", 00:10:47.987 "dma_device_type": 1 00:10:47.987 }, 00:10:47.987 { 00:10:47.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.987 "dma_device_type": 2 00:10:47.987 } 00:10:47.987 ], 00:10:47.987 "driver_specific": {} 00:10:47.987 } 00:10:47.987 ] 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.987 BaseBdev3 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.987 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.987 [ 00:10:47.987 { 00:10:47.987 "name": "BaseBdev3", 00:10:47.987 "aliases": [ 00:10:47.987 "30404c2b-b9e8-46af-ad89-00399ed4ea06" 00:10:47.987 ], 00:10:47.988 "product_name": "Malloc disk", 00:10:47.988 "block_size": 512, 00:10:47.988 "num_blocks": 65536, 00:10:47.988 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:47.988 "assigned_rate_limits": { 00:10:47.988 "rw_ios_per_sec": 0, 00:10:47.988 "rw_mbytes_per_sec": 0, 00:10:47.988 "r_mbytes_per_sec": 0, 00:10:47.988 "w_mbytes_per_sec": 0 00:10:47.988 }, 00:10:47.988 "claimed": false, 00:10:47.988 "zoned": false, 00:10:47.988 "supported_io_types": { 00:10:47.988 "read": true, 00:10:47.988 "write": true, 00:10:47.988 "unmap": true, 00:10:47.988 "flush": true, 00:10:47.988 "reset": true, 00:10:47.988 "nvme_admin": false, 00:10:47.988 "nvme_io": false, 00:10:47.988 "nvme_io_md": false, 00:10:47.988 
"write_zeroes": true, 00:10:47.988 "zcopy": true, 00:10:47.988 "get_zone_info": false, 00:10:47.988 "zone_management": false, 00:10:47.988 "zone_append": false, 00:10:47.988 "compare": false, 00:10:47.988 "compare_and_write": false, 00:10:47.988 "abort": true, 00:10:47.988 "seek_hole": false, 00:10:47.988 "seek_data": false, 00:10:47.988 "copy": true, 00:10:47.988 "nvme_iov_md": false 00:10:47.988 }, 00:10:47.988 "memory_domains": [ 00:10:47.988 { 00:10:47.988 "dma_device_id": "system", 00:10:47.988 "dma_device_type": 1 00:10:47.988 }, 00:10:47.988 { 00:10:47.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.988 "dma_device_type": 2 00:10:47.988 } 00:10:47.988 ], 00:10:47.988 "driver_specific": {} 00:10:47.988 } 00:10:47.988 ] 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.988 BaseBdev4 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.988 
13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.988 [ 00:10:47.988 { 00:10:47.988 "name": "BaseBdev4", 00:10:47.988 "aliases": [ 00:10:47.988 "347cf956-3d05-4362-ae30-f6a23fb7eb5c" 00:10:47.988 ], 00:10:47.988 "product_name": "Malloc disk", 00:10:47.988 "block_size": 512, 00:10:47.988 "num_blocks": 65536, 00:10:47.988 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:47.988 "assigned_rate_limits": { 00:10:47.988 "rw_ios_per_sec": 0, 00:10:47.988 "rw_mbytes_per_sec": 0, 00:10:47.988 "r_mbytes_per_sec": 0, 00:10:47.988 "w_mbytes_per_sec": 0 00:10:47.988 }, 00:10:47.988 "claimed": false, 00:10:47.988 "zoned": false, 00:10:47.988 "supported_io_types": { 00:10:47.988 "read": true, 00:10:47.988 "write": true, 00:10:47.988 "unmap": true, 00:10:47.988 "flush": true, 00:10:47.988 "reset": true, 00:10:47.988 "nvme_admin": false, 00:10:47.988 "nvme_io": false, 00:10:47.988 "nvme_io_md": false, 00:10:47.988 "write_zeroes": true, 
00:10:47.988 "zcopy": true, 00:10:47.988 "get_zone_info": false, 00:10:47.988 "zone_management": false, 00:10:47.988 "zone_append": false, 00:10:47.988 "compare": false, 00:10:47.988 "compare_and_write": false, 00:10:47.988 "abort": true, 00:10:47.988 "seek_hole": false, 00:10:47.988 "seek_data": false, 00:10:47.988 "copy": true, 00:10:47.988 "nvme_iov_md": false 00:10:47.988 }, 00:10:47.988 "memory_domains": [ 00:10:47.988 { 00:10:47.988 "dma_device_id": "system", 00:10:47.988 "dma_device_type": 1 00:10:47.988 }, 00:10:47.988 { 00:10:47.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.988 "dma_device_type": 2 00:10:47.988 } 00:10:47.988 ], 00:10:47.988 "driver_specific": {} 00:10:47.988 } 00:10:47.988 ] 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.988 [2024-11-18 13:27:17.971868] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.988 [2024-11-18 13:27:17.972007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.988 [2024-11-18 13:27:17.972049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.988 [2024-11-18 13:27:17.973788] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.988 [2024-11-18 13:27:17.973877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.988 13:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.988 13:27:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.988 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.988 "name": "Existed_Raid", 00:10:47.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.988 "strip_size_kb": 64, 00:10:47.988 "state": "configuring", 00:10:47.988 "raid_level": "raid0", 00:10:47.988 "superblock": false, 00:10:47.988 "num_base_bdevs": 4, 00:10:47.988 "num_base_bdevs_discovered": 3, 00:10:47.988 "num_base_bdevs_operational": 4, 00:10:47.989 "base_bdevs_list": [ 00:10:47.989 { 00:10:47.989 "name": "BaseBdev1", 00:10:47.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.989 "is_configured": false, 00:10:47.989 "data_offset": 0, 00:10:47.989 "data_size": 0 00:10:47.989 }, 00:10:47.989 { 00:10:47.989 "name": "BaseBdev2", 00:10:47.989 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:47.989 "is_configured": true, 00:10:47.989 "data_offset": 0, 00:10:47.989 "data_size": 65536 00:10:47.989 }, 00:10:47.989 { 00:10:47.989 "name": "BaseBdev3", 00:10:47.989 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:47.989 "is_configured": true, 00:10:47.989 "data_offset": 0, 00:10:47.989 "data_size": 65536 00:10:47.989 }, 00:10:47.989 { 00:10:47.989 "name": "BaseBdev4", 00:10:47.989 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:47.989 "is_configured": true, 00:10:47.989 "data_offset": 0, 00:10:47.989 "data_size": 65536 00:10:47.989 } 00:10:47.989 ] 00:10:47.989 }' 00:10:47.989 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.989 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:48.558 [2024-11-18 13:27:18.419139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.558 
13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.558 "name": "Existed_Raid", 00:10:48.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.558 "strip_size_kb": 64, 00:10:48.558 "state": "configuring", 00:10:48.558 "raid_level": "raid0", 00:10:48.558 "superblock": false, 00:10:48.558 "num_base_bdevs": 4, 00:10:48.558 "num_base_bdevs_discovered": 2, 00:10:48.558 "num_base_bdevs_operational": 4, 00:10:48.558 "base_bdevs_list": [ 00:10:48.558 { 00:10:48.558 "name": "BaseBdev1", 00:10:48.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.558 "is_configured": false, 00:10:48.558 "data_offset": 0, 00:10:48.558 "data_size": 0 00:10:48.558 }, 00:10:48.558 { 00:10:48.558 "name": null, 00:10:48.558 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:48.558 "is_configured": false, 00:10:48.558 "data_offset": 0, 00:10:48.558 "data_size": 65536 00:10:48.558 }, 00:10:48.558 { 00:10:48.558 "name": "BaseBdev3", 00:10:48.558 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:48.558 "is_configured": true, 00:10:48.558 "data_offset": 0, 00:10:48.558 "data_size": 65536 00:10:48.558 }, 00:10:48.558 { 00:10:48.558 "name": "BaseBdev4", 00:10:48.558 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:48.558 "is_configured": true, 00:10:48.558 "data_offset": 0, 00:10:48.558 "data_size": 65536 00:10:48.558 } 00:10:48.558 ] 00:10:48.558 }' 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.558 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.817 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.817 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.817 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.817 13:27:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.078 [2024-11-18 13:27:18.939989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.078 BaseBdev1 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.078 [ 00:10:49.078 { 00:10:49.078 "name": "BaseBdev1", 00:10:49.078 "aliases": [ 00:10:49.078 "4d092b73-d3c2-439c-bac8-9b31f78509d2" 00:10:49.078 ], 00:10:49.078 "product_name": "Malloc disk", 00:10:49.078 "block_size": 512, 00:10:49.078 "num_blocks": 65536, 00:10:49.078 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:49.078 "assigned_rate_limits": { 00:10:49.078 "rw_ios_per_sec": 0, 00:10:49.078 "rw_mbytes_per_sec": 0, 00:10:49.078 "r_mbytes_per_sec": 0, 00:10:49.078 "w_mbytes_per_sec": 0 00:10:49.078 }, 00:10:49.078 "claimed": true, 00:10:49.078 "claim_type": "exclusive_write", 00:10:49.078 "zoned": false, 00:10:49.078 "supported_io_types": { 00:10:49.078 "read": true, 00:10:49.078 "write": true, 00:10:49.078 "unmap": true, 00:10:49.078 "flush": true, 00:10:49.078 "reset": true, 00:10:49.078 "nvme_admin": false, 00:10:49.078 "nvme_io": false, 00:10:49.078 "nvme_io_md": false, 00:10:49.078 "write_zeroes": true, 00:10:49.078 "zcopy": true, 00:10:49.078 "get_zone_info": false, 00:10:49.078 "zone_management": false, 00:10:49.078 "zone_append": false, 00:10:49.078 "compare": false, 00:10:49.078 "compare_and_write": false, 00:10:49.078 "abort": true, 00:10:49.078 "seek_hole": false, 00:10:49.078 "seek_data": false, 00:10:49.078 "copy": true, 00:10:49.078 "nvme_iov_md": false 00:10:49.078 }, 00:10:49.078 "memory_domains": [ 00:10:49.078 { 00:10:49.078 "dma_device_id": "system", 00:10:49.078 "dma_device_type": 1 00:10:49.078 }, 00:10:49.078 { 00:10:49.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.078 "dma_device_type": 2 00:10:49.078 } 00:10:49.078 ], 00:10:49.078 "driver_specific": {} 
00:10:49.078 } 00:10:49.078 ] 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.078 13:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.078 13:27:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.078 "name": "Existed_Raid", 00:10:49.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.078 "strip_size_kb": 64, 00:10:49.078 "state": "configuring", 00:10:49.078 "raid_level": "raid0", 00:10:49.078 "superblock": false, 00:10:49.078 "num_base_bdevs": 4, 00:10:49.078 "num_base_bdevs_discovered": 3, 00:10:49.078 "num_base_bdevs_operational": 4, 00:10:49.078 "base_bdevs_list": [ 00:10:49.078 { 00:10:49.078 "name": "BaseBdev1", 00:10:49.078 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:49.078 "is_configured": true, 00:10:49.078 "data_offset": 0, 00:10:49.078 "data_size": 65536 00:10:49.078 }, 00:10:49.078 { 00:10:49.078 "name": null, 00:10:49.078 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:49.078 "is_configured": false, 00:10:49.078 "data_offset": 0, 00:10:49.078 "data_size": 65536 00:10:49.078 }, 00:10:49.078 { 00:10:49.078 "name": "BaseBdev3", 00:10:49.078 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:49.078 "is_configured": true, 00:10:49.078 "data_offset": 0, 00:10:49.078 "data_size": 65536 00:10:49.078 }, 00:10:49.078 { 00:10:49.078 "name": "BaseBdev4", 00:10:49.078 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:49.078 "is_configured": true, 00:10:49.078 "data_offset": 0, 00:10:49.078 "data_size": 65536 00:10:49.078 } 00:10:49.078 ] 00:10:49.078 }' 00:10:49.078 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.078 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.646 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.647 13:27:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.647 [2024-11-18 13:27:19.487166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.647 "name": "Existed_Raid", 00:10:49.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.647 "strip_size_kb": 64, 00:10:49.647 "state": "configuring", 00:10:49.647 "raid_level": "raid0", 00:10:49.647 "superblock": false, 00:10:49.647 "num_base_bdevs": 4, 00:10:49.647 "num_base_bdevs_discovered": 2, 00:10:49.647 "num_base_bdevs_operational": 4, 00:10:49.647 "base_bdevs_list": [ 00:10:49.647 { 00:10:49.647 "name": "BaseBdev1", 00:10:49.647 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:49.647 "is_configured": true, 00:10:49.647 "data_offset": 0, 00:10:49.647 "data_size": 65536 00:10:49.647 }, 00:10:49.647 { 00:10:49.647 "name": null, 00:10:49.647 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:49.647 "is_configured": false, 00:10:49.647 "data_offset": 0, 00:10:49.647 "data_size": 65536 00:10:49.647 }, 00:10:49.647 { 00:10:49.647 "name": null, 00:10:49.647 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:49.647 "is_configured": false, 00:10:49.647 "data_offset": 0, 00:10:49.647 "data_size": 65536 00:10:49.647 }, 00:10:49.647 { 00:10:49.647 "name": "BaseBdev4", 00:10:49.647 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:49.647 "is_configured": true, 00:10:49.647 "data_offset": 0, 00:10:49.647 "data_size": 65536 00:10:49.647 } 00:10:49.647 ] 
00:10:49.647 }' 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.647 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.958 [2024-11-18 13:27:19.970340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.958 
13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.958 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.220 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.220 13:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.220 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.220 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.220 13:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.220 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.220 "name": "Existed_Raid", 00:10:50.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.220 "strip_size_kb": 64, 00:10:50.220 "state": "configuring", 00:10:50.220 "raid_level": "raid0", 00:10:50.220 "superblock": false, 00:10:50.220 "num_base_bdevs": 4, 00:10:50.220 "num_base_bdevs_discovered": 3, 00:10:50.220 "num_base_bdevs_operational": 4, 00:10:50.220 "base_bdevs_list": [ 00:10:50.220 { 00:10:50.220 "name": "BaseBdev1", 00:10:50.220 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:50.220 "is_configured": true, 00:10:50.220 "data_offset": 0, 00:10:50.220 "data_size": 65536 00:10:50.221 }, 00:10:50.221 { 00:10:50.221 "name": null, 
00:10:50.221 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:50.221 "is_configured": false, 00:10:50.221 "data_offset": 0, 00:10:50.221 "data_size": 65536 00:10:50.221 }, 00:10:50.221 { 00:10:50.221 "name": "BaseBdev3", 00:10:50.221 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:50.221 "is_configured": true, 00:10:50.221 "data_offset": 0, 00:10:50.221 "data_size": 65536 00:10:50.221 }, 00:10:50.221 { 00:10:50.221 "name": "BaseBdev4", 00:10:50.221 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:50.221 "is_configured": true, 00:10:50.221 "data_offset": 0, 00:10:50.221 "data_size": 65536 00:10:50.221 } 00:10:50.221 ] 00:10:50.221 }' 00:10:50.221 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.221 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.481 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.481 [2024-11-18 13:27:20.489453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.742 
13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.742 "name": "Existed_Raid", 00:10:50.742 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:50.742 "strip_size_kb": 64, 00:10:50.742 "state": "configuring", 00:10:50.742 "raid_level": "raid0", 00:10:50.742 "superblock": false, 00:10:50.742 "num_base_bdevs": 4, 00:10:50.742 "num_base_bdevs_discovered": 2, 00:10:50.742 "num_base_bdevs_operational": 4, 00:10:50.742 "base_bdevs_list": [ 00:10:50.742 { 00:10:50.742 "name": null, 00:10:50.742 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:50.742 "is_configured": false, 00:10:50.742 "data_offset": 0, 00:10:50.742 "data_size": 65536 00:10:50.742 }, 00:10:50.742 { 00:10:50.742 "name": null, 00:10:50.742 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:50.742 "is_configured": false, 00:10:50.742 "data_offset": 0, 00:10:50.742 "data_size": 65536 00:10:50.742 }, 00:10:50.742 { 00:10:50.742 "name": "BaseBdev3", 00:10:50.742 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:50.742 "is_configured": true, 00:10:50.742 "data_offset": 0, 00:10:50.742 "data_size": 65536 00:10:50.742 }, 00:10:50.742 { 00:10:50.742 "name": "BaseBdev4", 00:10:50.742 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:50.742 "is_configured": true, 00:10:50.742 "data_offset": 0, 00:10:50.742 "data_size": 65536 00:10:50.742 } 00:10:50.742 ] 00:10:50.742 }' 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.742 13:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.310 [2024-11-18 13:27:21.119832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.310 "name": "Existed_Raid", 00:10:51.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.310 "strip_size_kb": 64, 00:10:51.310 "state": "configuring", 00:10:51.310 "raid_level": "raid0", 00:10:51.310 "superblock": false, 00:10:51.310 "num_base_bdevs": 4, 00:10:51.310 "num_base_bdevs_discovered": 3, 00:10:51.310 "num_base_bdevs_operational": 4, 00:10:51.310 "base_bdevs_list": [ 00:10:51.310 { 00:10:51.310 "name": null, 00:10:51.310 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:51.310 "is_configured": false, 00:10:51.310 "data_offset": 0, 00:10:51.310 "data_size": 65536 00:10:51.310 }, 00:10:51.310 { 00:10:51.310 "name": "BaseBdev2", 00:10:51.310 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:51.310 "is_configured": true, 00:10:51.310 "data_offset": 0, 00:10:51.310 "data_size": 65536 00:10:51.310 }, 00:10:51.310 { 00:10:51.310 "name": "BaseBdev3", 00:10:51.310 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:51.310 "is_configured": true, 00:10:51.310 "data_offset": 0, 00:10:51.310 "data_size": 65536 00:10:51.310 }, 00:10:51.310 { 00:10:51.310 "name": "BaseBdev4", 00:10:51.310 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:51.310 "is_configured": true, 00:10:51.310 "data_offset": 0, 00:10:51.310 "data_size": 65536 00:10:51.310 } 00:10:51.310 ] 00:10:51.310 }' 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:51.310 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.568 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.568 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.568 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.568 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.568 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4d092b73-d3c2-439c-bac8-9b31f78509d2 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.827 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.827 [2024-11-18 13:27:21.722737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:51.827 [2024-11-18 13:27:21.722875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:51.827 [2024-11-18 13:27:21.722900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:51.827 [2024-11-18 13:27:21.723197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:51.827 [2024-11-18 13:27:21.723392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:51.827 [2024-11-18 13:27:21.723409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:51.828 [2024-11-18 13:27:21.723647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.828 NewBaseBdev 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # 
rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 [ 00:10:51.828 { 00:10:51.828 "name": "NewBaseBdev", 00:10:51.828 "aliases": [ 00:10:51.828 "4d092b73-d3c2-439c-bac8-9b31f78509d2" 00:10:51.828 ], 00:10:51.828 "product_name": "Malloc disk", 00:10:51.828 "block_size": 512, 00:10:51.828 "num_blocks": 65536, 00:10:51.828 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:51.828 "assigned_rate_limits": { 00:10:51.828 "rw_ios_per_sec": 0, 00:10:51.828 "rw_mbytes_per_sec": 0, 00:10:51.828 "r_mbytes_per_sec": 0, 00:10:51.828 "w_mbytes_per_sec": 0 00:10:51.828 }, 00:10:51.828 "claimed": true, 00:10:51.828 "claim_type": "exclusive_write", 00:10:51.828 "zoned": false, 00:10:51.828 "supported_io_types": { 00:10:51.828 "read": true, 00:10:51.828 "write": true, 00:10:51.828 "unmap": true, 00:10:51.828 "flush": true, 00:10:51.828 "reset": true, 00:10:51.828 "nvme_admin": false, 00:10:51.828 "nvme_io": false, 00:10:51.828 "nvme_io_md": false, 00:10:51.828 "write_zeroes": true, 00:10:51.828 "zcopy": true, 00:10:51.828 "get_zone_info": false, 00:10:51.828 "zone_management": false, 00:10:51.828 "zone_append": false, 00:10:51.828 "compare": false, 00:10:51.828 "compare_and_write": false, 00:10:51.828 "abort": true, 00:10:51.828 "seek_hole": false, 00:10:51.828 "seek_data": false, 00:10:51.828 "copy": true, 00:10:51.828 "nvme_iov_md": false 00:10:51.828 }, 00:10:51.828 "memory_domains": [ 00:10:51.828 { 00:10:51.828 "dma_device_id": "system", 00:10:51.828 "dma_device_type": 1 00:10:51.828 }, 00:10:51.828 { 00:10:51.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.828 "dma_device_type": 2 00:10:51.828 } 00:10:51.828 ], 00:10:51.828 "driver_specific": {} 00:10:51.828 } 00:10:51.828 ] 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.828 "name": 
"Existed_Raid", 00:10:51.828 "uuid": "9f1a8d7e-58e6-49a9-b8ac-e2c53794c61d", 00:10:51.828 "strip_size_kb": 64, 00:10:51.828 "state": "online", 00:10:51.828 "raid_level": "raid0", 00:10:51.828 "superblock": false, 00:10:51.828 "num_base_bdevs": 4, 00:10:51.828 "num_base_bdevs_discovered": 4, 00:10:51.828 "num_base_bdevs_operational": 4, 00:10:51.828 "base_bdevs_list": [ 00:10:51.828 { 00:10:51.828 "name": "NewBaseBdev", 00:10:51.828 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:51.828 "is_configured": true, 00:10:51.828 "data_offset": 0, 00:10:51.828 "data_size": 65536 00:10:51.828 }, 00:10:51.828 { 00:10:51.828 "name": "BaseBdev2", 00:10:51.828 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:51.828 "is_configured": true, 00:10:51.828 "data_offset": 0, 00:10:51.828 "data_size": 65536 00:10:51.828 }, 00:10:51.828 { 00:10:51.828 "name": "BaseBdev3", 00:10:51.828 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:51.828 "is_configured": true, 00:10:51.828 "data_offset": 0, 00:10:51.828 "data_size": 65536 00:10:51.828 }, 00:10:51.828 { 00:10:51.828 "name": "BaseBdev4", 00:10:51.828 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:51.828 "is_configured": true, 00:10:51.828 "data_offset": 0, 00:10:51.828 "data_size": 65536 00:10:51.828 } 00:10:51.828 ] 00:10:51.828 }' 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.828 13:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.397 13:27:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.397 [2024-11-18 13:27:22.242396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.397 "name": "Existed_Raid", 00:10:52.397 "aliases": [ 00:10:52.397 "9f1a8d7e-58e6-49a9-b8ac-e2c53794c61d" 00:10:52.397 ], 00:10:52.397 "product_name": "Raid Volume", 00:10:52.397 "block_size": 512, 00:10:52.397 "num_blocks": 262144, 00:10:52.397 "uuid": "9f1a8d7e-58e6-49a9-b8ac-e2c53794c61d", 00:10:52.397 "assigned_rate_limits": { 00:10:52.397 "rw_ios_per_sec": 0, 00:10:52.397 "rw_mbytes_per_sec": 0, 00:10:52.397 "r_mbytes_per_sec": 0, 00:10:52.397 "w_mbytes_per_sec": 0 00:10:52.397 }, 00:10:52.397 "claimed": false, 00:10:52.397 "zoned": false, 00:10:52.397 "supported_io_types": { 00:10:52.397 "read": true, 00:10:52.397 "write": true, 00:10:52.397 "unmap": true, 00:10:52.397 "flush": true, 00:10:52.397 "reset": true, 00:10:52.397 "nvme_admin": false, 00:10:52.397 "nvme_io": false, 00:10:52.397 "nvme_io_md": false, 00:10:52.397 "write_zeroes": true, 00:10:52.397 "zcopy": false, 00:10:52.397 "get_zone_info": false, 00:10:52.397 "zone_management": false, 00:10:52.397 "zone_append": false, 00:10:52.397 "compare": 
false, 00:10:52.397 "compare_and_write": false, 00:10:52.397 "abort": false, 00:10:52.397 "seek_hole": false, 00:10:52.397 "seek_data": false, 00:10:52.397 "copy": false, 00:10:52.397 "nvme_iov_md": false 00:10:52.397 }, 00:10:52.397 "memory_domains": [ 00:10:52.397 { 00:10:52.397 "dma_device_id": "system", 00:10:52.397 "dma_device_type": 1 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.397 "dma_device_type": 2 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "dma_device_id": "system", 00:10:52.397 "dma_device_type": 1 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.397 "dma_device_type": 2 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "dma_device_id": "system", 00:10:52.397 "dma_device_type": 1 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.397 "dma_device_type": 2 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "dma_device_id": "system", 00:10:52.397 "dma_device_type": 1 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.397 "dma_device_type": 2 00:10:52.397 } 00:10:52.397 ], 00:10:52.397 "driver_specific": { 00:10:52.397 "raid": { 00:10:52.397 "uuid": "9f1a8d7e-58e6-49a9-b8ac-e2c53794c61d", 00:10:52.397 "strip_size_kb": 64, 00:10:52.397 "state": "online", 00:10:52.397 "raid_level": "raid0", 00:10:52.397 "superblock": false, 00:10:52.397 "num_base_bdevs": 4, 00:10:52.397 "num_base_bdevs_discovered": 4, 00:10:52.397 "num_base_bdevs_operational": 4, 00:10:52.397 "base_bdevs_list": [ 00:10:52.397 { 00:10:52.397 "name": "NewBaseBdev", 00:10:52.397 "uuid": "4d092b73-d3c2-439c-bac8-9b31f78509d2", 00:10:52.397 "is_configured": true, 00:10:52.397 "data_offset": 0, 00:10:52.397 "data_size": 65536 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "name": "BaseBdev2", 00:10:52.397 "uuid": "5e7e7321-74fd-4013-8800-8dde09e36a43", 00:10:52.397 "is_configured": true, 00:10:52.397 "data_offset": 0, 00:10:52.397 
"data_size": 65536 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "name": "BaseBdev3", 00:10:52.397 "uuid": "30404c2b-b9e8-46af-ad89-00399ed4ea06", 00:10:52.397 "is_configured": true, 00:10:52.397 "data_offset": 0, 00:10:52.397 "data_size": 65536 00:10:52.397 }, 00:10:52.397 { 00:10:52.397 "name": "BaseBdev4", 00:10:52.397 "uuid": "347cf956-3d05-4362-ae30-f6a23fb7eb5c", 00:10:52.397 "is_configured": true, 00:10:52.397 "data_offset": 0, 00:10:52.397 "data_size": 65536 00:10:52.397 } 00:10:52.397 ] 00:10:52.397 } 00:10:52.397 } 00:10:52.397 }' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:52.397 BaseBdev2 00:10:52.397 BaseBdev3 00:10:52.397 BaseBdev4' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 
' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.397 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.658 [2024-11-18 13:27:22.569439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.658 [2024-11-18 13:27:22.569481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.658 [2024-11-18 13:27:22.569571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.658 [2024-11-18 13:27:22.569640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.658 [2024-11-18 13:27:22.569651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:52.658 13:27:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69384 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69384 ']' 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69384 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69384 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69384' 00:10:52.658 killing process with pid 69384 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69384 00:10:52.658 [2024-11-18 13:27:22.617208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.658 13:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69384 00:10:53.229 [2024-11-18 13:27:23.013769] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:54.169 00:10:54.169 real 0m11.576s 00:10:54.169 user 0m18.417s 00:10:54.169 sys 0m2.125s 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:54.169 ************************************ 00:10:54.169 END TEST raid_state_function_test 00:10:54.169 ************************************ 00:10:54.169 13:27:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:54.169 13:27:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:54.169 13:27:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.169 13:27:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.169 ************************************ 00:10:54.169 START TEST raid_state_function_test_sb 00:10:54.169 ************************************ 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i++ )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:54.169 13:27:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70063 00:10:54.169 Process raid pid: 70063 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70063' 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70063 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70063 ']' 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.169 13:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.429 [2024-11-18 13:27:24.288344] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:54.429 [2024-11-18 13:27:24.288548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.429 [2024-11-18 13:27:24.461566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.688 [2024-11-18 13:27:24.573904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.948 [2024-11-18 13:27:24.775643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.948 [2024-11-18 13:27:24.775777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.208 [2024-11-18 13:27:25.134589] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.208 [2024-11-18 13:27:25.134725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.208 [2024-11-18 13:27:25.134756] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.208 [2024-11-18 13:27:25.134781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.208 [2024-11-18 13:27:25.134800] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:55.208 [2024-11-18 13:27:25.134811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.208 [2024-11-18 13:27:25.134817] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.208 [2024-11-18 13:27:25.134826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.208 13:27:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.208 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.208 "name": "Existed_Raid", 00:10:55.208 "uuid": "3d891e18-6450-4222-a80d-8783f6798f3f", 00:10:55.208 "strip_size_kb": 64, 00:10:55.208 "state": "configuring", 00:10:55.208 "raid_level": "raid0", 00:10:55.208 "superblock": true, 00:10:55.208 "num_base_bdevs": 4, 00:10:55.208 "num_base_bdevs_discovered": 0, 00:10:55.208 "num_base_bdevs_operational": 4, 00:10:55.208 "base_bdevs_list": [ 00:10:55.208 { 00:10:55.208 "name": "BaseBdev1", 00:10:55.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.208 "is_configured": false, 00:10:55.208 "data_offset": 0, 00:10:55.208 "data_size": 0 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "name": "BaseBdev2", 00:10:55.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.208 "is_configured": false, 00:10:55.208 "data_offset": 0, 00:10:55.208 "data_size": 0 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "name": "BaseBdev3", 00:10:55.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.208 "is_configured": false, 00:10:55.208 "data_offset": 0, 00:10:55.208 "data_size": 0 00:10:55.208 }, 00:10:55.208 { 00:10:55.208 "name": "BaseBdev4", 00:10:55.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.208 "is_configured": false, 00:10:55.208 "data_offset": 0, 00:10:55.208 "data_size": 0 00:10:55.208 } 00:10:55.208 ] 00:10:55.208 }' 00:10:55.209 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.209 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.779 [2024-11-18 13:27:25.601691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.779 [2024-11-18 13:27:25.601816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.779 [2024-11-18 13:27:25.613678] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.779 [2024-11-18 13:27:25.613769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.779 [2024-11-18 13:27:25.613796] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.779 [2024-11-18 13:27:25.613818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.779 [2024-11-18 13:27:25.613836] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:55.779 [2024-11-18 13:27:25.613856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.779 [2024-11-18 13:27:25.613874] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:55.779 [2024-11-18 13:27:25.613895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.779 [2024-11-18 13:27:25.662773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.779 BaseBdev1 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.779 [ 00:10:55.779 { 00:10:55.779 "name": "BaseBdev1", 00:10:55.779 "aliases": [ 00:10:55.779 "082ea906-d339-438e-bef1-625e8008fdda" 00:10:55.779 ], 00:10:55.779 "product_name": "Malloc disk", 00:10:55.779 "block_size": 512, 00:10:55.779 "num_blocks": 65536, 00:10:55.779 "uuid": "082ea906-d339-438e-bef1-625e8008fdda", 00:10:55.779 "assigned_rate_limits": { 00:10:55.779 "rw_ios_per_sec": 0, 00:10:55.779 "rw_mbytes_per_sec": 0, 00:10:55.779 "r_mbytes_per_sec": 0, 00:10:55.779 "w_mbytes_per_sec": 0 00:10:55.779 }, 00:10:55.779 "claimed": true, 00:10:55.779 "claim_type": "exclusive_write", 00:10:55.779 "zoned": false, 00:10:55.779 "supported_io_types": { 00:10:55.779 "read": true, 00:10:55.779 "write": true, 00:10:55.779 "unmap": true, 00:10:55.779 "flush": true, 00:10:55.779 "reset": true, 00:10:55.779 "nvme_admin": false, 00:10:55.779 "nvme_io": false, 00:10:55.779 "nvme_io_md": false, 00:10:55.779 "write_zeroes": true, 00:10:55.779 "zcopy": true, 00:10:55.779 "get_zone_info": false, 00:10:55.779 "zone_management": false, 00:10:55.779 "zone_append": false, 00:10:55.779 "compare": false, 00:10:55.779 "compare_and_write": false, 00:10:55.779 "abort": true, 00:10:55.779 "seek_hole": false, 00:10:55.779 "seek_data": false, 00:10:55.779 "copy": true, 00:10:55.779 "nvme_iov_md": false 00:10:55.779 }, 00:10:55.779 "memory_domains": [ 00:10:55.779 { 00:10:55.779 "dma_device_id": "system", 00:10:55.779 "dma_device_type": 1 00:10:55.779 }, 00:10:55.779 { 00:10:55.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.779 "dma_device_type": 2 00:10:55.779 } 00:10:55.779 ], 00:10:55.779 "driver_specific": {} 
00:10:55.779 } 00:10:55.779 ] 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.779 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.779 "name": "Existed_Raid", 00:10:55.779 "uuid": "ec364b49-13b9-4bd1-903c-3793998c8ce5", 00:10:55.779 "strip_size_kb": 64, 00:10:55.779 "state": "configuring", 00:10:55.779 "raid_level": "raid0", 00:10:55.779 "superblock": true, 00:10:55.779 "num_base_bdevs": 4, 00:10:55.779 "num_base_bdevs_discovered": 1, 00:10:55.779 "num_base_bdevs_operational": 4, 00:10:55.779 "base_bdevs_list": [ 00:10:55.779 { 00:10:55.779 "name": "BaseBdev1", 00:10:55.779 "uuid": "082ea906-d339-438e-bef1-625e8008fdda", 00:10:55.779 "is_configured": true, 00:10:55.779 "data_offset": 2048, 00:10:55.779 "data_size": 63488 00:10:55.779 }, 00:10:55.779 { 00:10:55.779 "name": "BaseBdev2", 00:10:55.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.779 "is_configured": false, 00:10:55.779 "data_offset": 0, 00:10:55.779 "data_size": 0 00:10:55.779 }, 00:10:55.779 { 00:10:55.779 "name": "BaseBdev3", 00:10:55.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.779 "is_configured": false, 00:10:55.779 "data_offset": 0, 00:10:55.779 "data_size": 0 00:10:55.779 }, 00:10:55.779 { 00:10:55.779 "name": "BaseBdev4", 00:10:55.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.780 "is_configured": false, 00:10:55.780 "data_offset": 0, 00:10:55.780 "data_size": 0 00:10:55.780 } 00:10:55.780 ] 00:10:55.780 }' 00:10:55.780 13:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.780 13:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.349 [2024-11-18 13:27:26.126185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.349 [2024-11-18 13:27:26.126243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.349 [2024-11-18 13:27:26.138203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.349 [2024-11-18 13:27:26.139998] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.349 [2024-11-18 13:27:26.140044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.349 [2024-11-18 13:27:26.140054] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:56.349 [2024-11-18 13:27:26.140064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.349 [2024-11-18 13:27:26.140071] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:56.349 [2024-11-18 13:27:26.140079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:56.349 13:27:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.349 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.349 "name": 
"Existed_Raid", 00:10:56.350 "uuid": "e46f5b1a-ecee-49ae-9f25-d06b3224975e", 00:10:56.350 "strip_size_kb": 64, 00:10:56.350 "state": "configuring", 00:10:56.350 "raid_level": "raid0", 00:10:56.350 "superblock": true, 00:10:56.350 "num_base_bdevs": 4, 00:10:56.350 "num_base_bdevs_discovered": 1, 00:10:56.350 "num_base_bdevs_operational": 4, 00:10:56.350 "base_bdevs_list": [ 00:10:56.350 { 00:10:56.350 "name": "BaseBdev1", 00:10:56.350 "uuid": "082ea906-d339-438e-bef1-625e8008fdda", 00:10:56.350 "is_configured": true, 00:10:56.350 "data_offset": 2048, 00:10:56.350 "data_size": 63488 00:10:56.350 }, 00:10:56.350 { 00:10:56.350 "name": "BaseBdev2", 00:10:56.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.350 "is_configured": false, 00:10:56.350 "data_offset": 0, 00:10:56.350 "data_size": 0 00:10:56.350 }, 00:10:56.350 { 00:10:56.350 "name": "BaseBdev3", 00:10:56.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.350 "is_configured": false, 00:10:56.350 "data_offset": 0, 00:10:56.350 "data_size": 0 00:10:56.350 }, 00:10:56.350 { 00:10:56.350 "name": "BaseBdev4", 00:10:56.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.350 "is_configured": false, 00:10:56.350 "data_offset": 0, 00:10:56.350 "data_size": 0 00:10:56.350 } 00:10:56.350 ] 00:10:56.350 }' 00:10:56.350 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.350 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.609 [2024-11-18 13:27:26.646497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:56.609 BaseBdev2 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.609 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.868 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.868 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:56.868 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.868 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.868 [ 00:10:56.868 { 00:10:56.868 "name": "BaseBdev2", 00:10:56.868 "aliases": [ 00:10:56.868 "1f1f37ca-456f-42f1-b49a-6d78a6da30f4" 00:10:56.868 ], 00:10:56.869 "product_name": "Malloc disk", 00:10:56.869 "block_size": 512, 00:10:56.869 "num_blocks": 65536, 00:10:56.869 "uuid": "1f1f37ca-456f-42f1-b49a-6d78a6da30f4", 00:10:56.869 
"assigned_rate_limits": { 00:10:56.869 "rw_ios_per_sec": 0, 00:10:56.869 "rw_mbytes_per_sec": 0, 00:10:56.869 "r_mbytes_per_sec": 0, 00:10:56.869 "w_mbytes_per_sec": 0 00:10:56.869 }, 00:10:56.869 "claimed": true, 00:10:56.869 "claim_type": "exclusive_write", 00:10:56.869 "zoned": false, 00:10:56.869 "supported_io_types": { 00:10:56.869 "read": true, 00:10:56.869 "write": true, 00:10:56.869 "unmap": true, 00:10:56.869 "flush": true, 00:10:56.869 "reset": true, 00:10:56.869 "nvme_admin": false, 00:10:56.869 "nvme_io": false, 00:10:56.869 "nvme_io_md": false, 00:10:56.869 "write_zeroes": true, 00:10:56.869 "zcopy": true, 00:10:56.869 "get_zone_info": false, 00:10:56.869 "zone_management": false, 00:10:56.869 "zone_append": false, 00:10:56.869 "compare": false, 00:10:56.869 "compare_and_write": false, 00:10:56.869 "abort": true, 00:10:56.869 "seek_hole": false, 00:10:56.869 "seek_data": false, 00:10:56.869 "copy": true, 00:10:56.869 "nvme_iov_md": false 00:10:56.869 }, 00:10:56.869 "memory_domains": [ 00:10:56.869 { 00:10:56.869 "dma_device_id": "system", 00:10:56.869 "dma_device_type": 1 00:10:56.869 }, 00:10:56.869 { 00:10:56.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.869 "dma_device_type": 2 00:10:56.869 } 00:10:56.869 ], 00:10:56.869 "driver_specific": {} 00:10:56.869 } 00:10:56.869 ] 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.869 "name": "Existed_Raid", 00:10:56.869 "uuid": "e46f5b1a-ecee-49ae-9f25-d06b3224975e", 00:10:56.869 "strip_size_kb": 64, 00:10:56.869 "state": "configuring", 00:10:56.869 "raid_level": "raid0", 00:10:56.869 "superblock": true, 00:10:56.869 "num_base_bdevs": 4, 00:10:56.869 "num_base_bdevs_discovered": 2, 00:10:56.869 "num_base_bdevs_operational": 4, 
00:10:56.869 "base_bdevs_list": [ 00:10:56.869 { 00:10:56.869 "name": "BaseBdev1", 00:10:56.869 "uuid": "082ea906-d339-438e-bef1-625e8008fdda", 00:10:56.869 "is_configured": true, 00:10:56.869 "data_offset": 2048, 00:10:56.869 "data_size": 63488 00:10:56.869 }, 00:10:56.869 { 00:10:56.869 "name": "BaseBdev2", 00:10:56.869 "uuid": "1f1f37ca-456f-42f1-b49a-6d78a6da30f4", 00:10:56.869 "is_configured": true, 00:10:56.869 "data_offset": 2048, 00:10:56.869 "data_size": 63488 00:10:56.869 }, 00:10:56.869 { 00:10:56.869 "name": "BaseBdev3", 00:10:56.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.869 "is_configured": false, 00:10:56.869 "data_offset": 0, 00:10:56.869 "data_size": 0 00:10:56.869 }, 00:10:56.869 { 00:10:56.869 "name": "BaseBdev4", 00:10:56.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.869 "is_configured": false, 00:10:56.869 "data_offset": 0, 00:10:56.869 "data_size": 0 00:10:56.869 } 00:10:56.869 ] 00:10:56.869 }' 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.869 13:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.129 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.129 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.129 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.390 [2024-11-18 13:27:27.182428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.390 BaseBdev3 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.390 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.390 [ 00:10:57.390 { 00:10:57.390 "name": "BaseBdev3", 00:10:57.390 "aliases": [ 00:10:57.390 "9407a6aa-e2e6-4a92-9d07-c396f3f17c1a" 00:10:57.390 ], 00:10:57.390 "product_name": "Malloc disk", 00:10:57.390 "block_size": 512, 00:10:57.390 "num_blocks": 65536, 00:10:57.390 "uuid": "9407a6aa-e2e6-4a92-9d07-c396f3f17c1a", 00:10:57.390 "assigned_rate_limits": { 00:10:57.390 "rw_ios_per_sec": 0, 00:10:57.390 "rw_mbytes_per_sec": 0, 00:10:57.390 "r_mbytes_per_sec": 0, 00:10:57.390 "w_mbytes_per_sec": 0 00:10:57.390 }, 00:10:57.390 "claimed": true, 00:10:57.390 "claim_type": "exclusive_write", 00:10:57.390 "zoned": false, 00:10:57.390 "supported_io_types": { 00:10:57.390 "read": true, 00:10:57.390 
"write": true, 00:10:57.390 "unmap": true, 00:10:57.390 "flush": true, 00:10:57.390 "reset": true, 00:10:57.390 "nvme_admin": false, 00:10:57.390 "nvme_io": false, 00:10:57.390 "nvme_io_md": false, 00:10:57.390 "write_zeroes": true, 00:10:57.390 "zcopy": true, 00:10:57.390 "get_zone_info": false, 00:10:57.390 "zone_management": false, 00:10:57.391 "zone_append": false, 00:10:57.391 "compare": false, 00:10:57.391 "compare_and_write": false, 00:10:57.391 "abort": true, 00:10:57.391 "seek_hole": false, 00:10:57.391 "seek_data": false, 00:10:57.391 "copy": true, 00:10:57.391 "nvme_iov_md": false 00:10:57.391 }, 00:10:57.391 "memory_domains": [ 00:10:57.391 { 00:10:57.391 "dma_device_id": "system", 00:10:57.391 "dma_device_type": 1 00:10:57.391 }, 00:10:57.391 { 00:10:57.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.391 "dma_device_type": 2 00:10:57.391 } 00:10:57.391 ], 00:10:57.391 "driver_specific": {} 00:10:57.391 } 00:10:57.391 ] 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.391 "name": "Existed_Raid", 00:10:57.391 "uuid": "e46f5b1a-ecee-49ae-9f25-d06b3224975e", 00:10:57.391 "strip_size_kb": 64, 00:10:57.391 "state": "configuring", 00:10:57.391 "raid_level": "raid0", 00:10:57.391 "superblock": true, 00:10:57.391 "num_base_bdevs": 4, 00:10:57.391 "num_base_bdevs_discovered": 3, 00:10:57.391 "num_base_bdevs_operational": 4, 00:10:57.391 "base_bdevs_list": [ 00:10:57.391 { 00:10:57.391 "name": "BaseBdev1", 00:10:57.391 "uuid": "082ea906-d339-438e-bef1-625e8008fdda", 00:10:57.391 "is_configured": true, 00:10:57.391 "data_offset": 2048, 00:10:57.391 "data_size": 63488 00:10:57.391 }, 00:10:57.391 { 00:10:57.391 "name": "BaseBdev2", 00:10:57.391 "uuid": 
"1f1f37ca-456f-42f1-b49a-6d78a6da30f4", 00:10:57.391 "is_configured": true, 00:10:57.391 "data_offset": 2048, 00:10:57.391 "data_size": 63488 00:10:57.391 }, 00:10:57.391 { 00:10:57.391 "name": "BaseBdev3", 00:10:57.391 "uuid": "9407a6aa-e2e6-4a92-9d07-c396f3f17c1a", 00:10:57.391 "is_configured": true, 00:10:57.391 "data_offset": 2048, 00:10:57.391 "data_size": 63488 00:10:57.391 }, 00:10:57.391 { 00:10:57.391 "name": "BaseBdev4", 00:10:57.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.391 "is_configured": false, 00:10:57.391 "data_offset": 0, 00:10:57.391 "data_size": 0 00:10:57.391 } 00:10:57.391 ] 00:10:57.391 }' 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.391 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.650 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.650 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.650 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.909 [2024-11-18 13:27:27.715107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.909 [2024-11-18 13:27:27.715389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:57.909 [2024-11-18 13:27:27.715403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:57.909 BaseBdev4 00:10:57.909 [2024-11-18 13:27:27.715666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.909 [2024-11-18 13:27:27.715832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:57.909 [2024-11-18 13:27:27.715844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:57.909 [2024-11-18 13:27:27.715984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.909 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.909 [ 00:10:57.909 { 00:10:57.909 "name": "BaseBdev4", 00:10:57.909 "aliases": [ 00:10:57.909 "013bf69d-6f7c-4546-8ab7-5437cadd46c0" 00:10:57.909 ], 00:10:57.909 "product_name": "Malloc disk", 00:10:57.910 "block_size": 512, 00:10:57.910 
"num_blocks": 65536, 00:10:57.910 "uuid": "013bf69d-6f7c-4546-8ab7-5437cadd46c0", 00:10:57.910 "assigned_rate_limits": { 00:10:57.910 "rw_ios_per_sec": 0, 00:10:57.910 "rw_mbytes_per_sec": 0, 00:10:57.910 "r_mbytes_per_sec": 0, 00:10:57.910 "w_mbytes_per_sec": 0 00:10:57.910 }, 00:10:57.910 "claimed": true, 00:10:57.910 "claim_type": "exclusive_write", 00:10:57.910 "zoned": false, 00:10:57.910 "supported_io_types": { 00:10:57.910 "read": true, 00:10:57.910 "write": true, 00:10:57.910 "unmap": true, 00:10:57.910 "flush": true, 00:10:57.910 "reset": true, 00:10:57.910 "nvme_admin": false, 00:10:57.910 "nvme_io": false, 00:10:57.910 "nvme_io_md": false, 00:10:57.910 "write_zeroes": true, 00:10:57.910 "zcopy": true, 00:10:57.910 "get_zone_info": false, 00:10:57.910 "zone_management": false, 00:10:57.910 "zone_append": false, 00:10:57.910 "compare": false, 00:10:57.910 "compare_and_write": false, 00:10:57.910 "abort": true, 00:10:57.910 "seek_hole": false, 00:10:57.910 "seek_data": false, 00:10:57.910 "copy": true, 00:10:57.910 "nvme_iov_md": false 00:10:57.910 }, 00:10:57.910 "memory_domains": [ 00:10:57.910 { 00:10:57.910 "dma_device_id": "system", 00:10:57.910 "dma_device_type": 1 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.910 "dma_device_type": 2 00:10:57.910 } 00:10:57.910 ], 00:10:57.910 "driver_specific": {} 00:10:57.910 } 00:10:57.910 ] 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.910 "name": "Existed_Raid", 00:10:57.910 "uuid": "e46f5b1a-ecee-49ae-9f25-d06b3224975e", 00:10:57.910 "strip_size_kb": 64, 00:10:57.910 "state": "online", 00:10:57.910 "raid_level": "raid0", 00:10:57.910 "superblock": true, 00:10:57.910 "num_base_bdevs": 4, 
00:10:57.910 "num_base_bdevs_discovered": 4, 00:10:57.910 "num_base_bdevs_operational": 4, 00:10:57.910 "base_bdevs_list": [ 00:10:57.910 { 00:10:57.910 "name": "BaseBdev1", 00:10:57.910 "uuid": "082ea906-d339-438e-bef1-625e8008fdda", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 2048, 00:10:57.910 "data_size": 63488 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev2", 00:10:57.910 "uuid": "1f1f37ca-456f-42f1-b49a-6d78a6da30f4", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 2048, 00:10:57.910 "data_size": 63488 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev3", 00:10:57.910 "uuid": "9407a6aa-e2e6-4a92-9d07-c396f3f17c1a", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 2048, 00:10:57.910 "data_size": 63488 00:10:57.910 }, 00:10:57.910 { 00:10:57.910 "name": "BaseBdev4", 00:10:57.910 "uuid": "013bf69d-6f7c-4546-8ab7-5437cadd46c0", 00:10:57.910 "is_configured": true, 00:10:57.910 "data_offset": 2048, 00:10:57.910 "data_size": 63488 00:10:57.910 } 00:10:57.910 ] 00:10:57.910 }' 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.910 13:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.171 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:58.434 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:58.434 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.434 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.434 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.434 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.434 
13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:58.434 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.434 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.435 [2024-11-18 13:27:28.238653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.435 "name": "Existed_Raid", 00:10:58.435 "aliases": [ 00:10:58.435 "e46f5b1a-ecee-49ae-9f25-d06b3224975e" 00:10:58.435 ], 00:10:58.435 "product_name": "Raid Volume", 00:10:58.435 "block_size": 512, 00:10:58.435 "num_blocks": 253952, 00:10:58.435 "uuid": "e46f5b1a-ecee-49ae-9f25-d06b3224975e", 00:10:58.435 "assigned_rate_limits": { 00:10:58.435 "rw_ios_per_sec": 0, 00:10:58.435 "rw_mbytes_per_sec": 0, 00:10:58.435 "r_mbytes_per_sec": 0, 00:10:58.435 "w_mbytes_per_sec": 0 00:10:58.435 }, 00:10:58.435 "claimed": false, 00:10:58.435 "zoned": false, 00:10:58.435 "supported_io_types": { 00:10:58.435 "read": true, 00:10:58.435 "write": true, 00:10:58.435 "unmap": true, 00:10:58.435 "flush": true, 00:10:58.435 "reset": true, 00:10:58.435 "nvme_admin": false, 00:10:58.435 "nvme_io": false, 00:10:58.435 "nvme_io_md": false, 00:10:58.435 "write_zeroes": true, 00:10:58.435 "zcopy": false, 00:10:58.435 "get_zone_info": false, 00:10:58.435 "zone_management": false, 00:10:58.435 "zone_append": false, 00:10:58.435 "compare": false, 00:10:58.435 "compare_and_write": false, 00:10:58.435 "abort": false, 00:10:58.435 "seek_hole": false, 00:10:58.435 "seek_data": false, 00:10:58.435 "copy": false, 00:10:58.435 
"nvme_iov_md": false 00:10:58.435 }, 00:10:58.435 "memory_domains": [ 00:10:58.435 { 00:10:58.435 "dma_device_id": "system", 00:10:58.435 "dma_device_type": 1 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.435 "dma_device_type": 2 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "dma_device_id": "system", 00:10:58.435 "dma_device_type": 1 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.435 "dma_device_type": 2 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "dma_device_id": "system", 00:10:58.435 "dma_device_type": 1 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.435 "dma_device_type": 2 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "dma_device_id": "system", 00:10:58.435 "dma_device_type": 1 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.435 "dma_device_type": 2 00:10:58.435 } 00:10:58.435 ], 00:10:58.435 "driver_specific": { 00:10:58.435 "raid": { 00:10:58.435 "uuid": "e46f5b1a-ecee-49ae-9f25-d06b3224975e", 00:10:58.435 "strip_size_kb": 64, 00:10:58.435 "state": "online", 00:10:58.435 "raid_level": "raid0", 00:10:58.435 "superblock": true, 00:10:58.435 "num_base_bdevs": 4, 00:10:58.435 "num_base_bdevs_discovered": 4, 00:10:58.435 "num_base_bdevs_operational": 4, 00:10:58.435 "base_bdevs_list": [ 00:10:58.435 { 00:10:58.435 "name": "BaseBdev1", 00:10:58.435 "uuid": "082ea906-d339-438e-bef1-625e8008fdda", 00:10:58.435 "is_configured": true, 00:10:58.435 "data_offset": 2048, 00:10:58.435 "data_size": 63488 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "name": "BaseBdev2", 00:10:58.435 "uuid": "1f1f37ca-456f-42f1-b49a-6d78a6da30f4", 00:10:58.435 "is_configured": true, 00:10:58.435 "data_offset": 2048, 00:10:58.435 "data_size": 63488 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "name": "BaseBdev3", 00:10:58.435 "uuid": "9407a6aa-e2e6-4a92-9d07-c396f3f17c1a", 00:10:58.435 "is_configured": true, 
00:10:58.435 "data_offset": 2048, 00:10:58.435 "data_size": 63488 00:10:58.435 }, 00:10:58.435 { 00:10:58.435 "name": "BaseBdev4", 00:10:58.435 "uuid": "013bf69d-6f7c-4546-8ab7-5437cadd46c0", 00:10:58.435 "is_configured": true, 00:10:58.435 "data_offset": 2048, 00:10:58.435 "data_size": 63488 00:10:58.435 } 00:10:58.435 ] 00:10:58.435 } 00:10:58.435 } 00:10:58.435 }' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:58.435 BaseBdev2 00:10:58.435 BaseBdev3 00:10:58.435 BaseBdev4' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.435 13:27:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.435 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.695 [2024-11-18 13:27:28.557798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.695 [2024-11-18 13:27:28.557885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.695 [2024-11-18 13:27:28.557963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:58.695 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.695 "name": "Existed_Raid", 00:10:58.695 "uuid": "e46f5b1a-ecee-49ae-9f25-d06b3224975e", 00:10:58.695 "strip_size_kb": 64, 00:10:58.695 "state": "offline", 00:10:58.695 "raid_level": "raid0", 00:10:58.695 "superblock": true, 00:10:58.695 "num_base_bdevs": 4, 00:10:58.695 "num_base_bdevs_discovered": 3, 00:10:58.695 "num_base_bdevs_operational": 3, 00:10:58.695 "base_bdevs_list": [ 00:10:58.695 { 00:10:58.695 "name": null, 00:10:58.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.695 "is_configured": false, 00:10:58.695 "data_offset": 0, 00:10:58.695 "data_size": 63488 00:10:58.695 }, 00:10:58.695 { 00:10:58.695 "name": "BaseBdev2", 00:10:58.695 "uuid": "1f1f37ca-456f-42f1-b49a-6d78a6da30f4", 00:10:58.696 "is_configured": true, 00:10:58.696 "data_offset": 2048, 00:10:58.696 "data_size": 63488 00:10:58.696 }, 00:10:58.696 { 00:10:58.696 "name": "BaseBdev3", 00:10:58.696 "uuid": "9407a6aa-e2e6-4a92-9d07-c396f3f17c1a", 00:10:58.696 "is_configured": true, 00:10:58.696 "data_offset": 2048, 00:10:58.696 "data_size": 63488 00:10:58.696 }, 00:10:58.696 { 00:10:58.696 "name": "BaseBdev4", 00:10:58.696 "uuid": "013bf69d-6f7c-4546-8ab7-5437cadd46c0", 00:10:58.696 "is_configured": true, 00:10:58.696 "data_offset": 2048, 00:10:58.696 "data_size": 63488 00:10:58.696 } 00:10:58.696 ] 00:10:58.696 }' 00:10:58.696 13:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.696 13:27:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.265 13:27:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.265 [2024-11-18 13:27:29.200645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.265 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.525 [2024-11-18 13:27:29.352942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:59.525 13:27:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.525 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.525 [2024-11-18 13:27:29.503790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:59.525 [2024-11-18 13:27:29.503895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.785 BaseBdev2 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:59.785 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.786 [ 00:10:59.786 { 00:10:59.786 "name": "BaseBdev2", 00:10:59.786 "aliases": [ 00:10:59.786 
"155fd272-2432-4db1-bc0b-61dfea89e35d" 00:10:59.786 ], 00:10:59.786 "product_name": "Malloc disk", 00:10:59.786 "block_size": 512, 00:10:59.786 "num_blocks": 65536, 00:10:59.786 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:10:59.786 "assigned_rate_limits": { 00:10:59.786 "rw_ios_per_sec": 0, 00:10:59.786 "rw_mbytes_per_sec": 0, 00:10:59.786 "r_mbytes_per_sec": 0, 00:10:59.786 "w_mbytes_per_sec": 0 00:10:59.786 }, 00:10:59.786 "claimed": false, 00:10:59.786 "zoned": false, 00:10:59.786 "supported_io_types": { 00:10:59.786 "read": true, 00:10:59.786 "write": true, 00:10:59.786 "unmap": true, 00:10:59.786 "flush": true, 00:10:59.786 "reset": true, 00:10:59.786 "nvme_admin": false, 00:10:59.786 "nvme_io": false, 00:10:59.786 "nvme_io_md": false, 00:10:59.786 "write_zeroes": true, 00:10:59.786 "zcopy": true, 00:10:59.786 "get_zone_info": false, 00:10:59.786 "zone_management": false, 00:10:59.786 "zone_append": false, 00:10:59.786 "compare": false, 00:10:59.786 "compare_and_write": false, 00:10:59.786 "abort": true, 00:10:59.786 "seek_hole": false, 00:10:59.786 "seek_data": false, 00:10:59.786 "copy": true, 00:10:59.786 "nvme_iov_md": false 00:10:59.786 }, 00:10:59.786 "memory_domains": [ 00:10:59.786 { 00:10:59.786 "dma_device_id": "system", 00:10:59.786 "dma_device_type": 1 00:10:59.786 }, 00:10:59.786 { 00:10:59.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.786 "dma_device_type": 2 00:10:59.786 } 00:10:59.786 ], 00:10:59.786 "driver_specific": {} 00:10:59.786 } 00:10:59.786 ] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.786 13:27:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.786 BaseBdev3 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.786 [ 00:10:59.786 { 
00:10:59.786 "name": "BaseBdev3", 00:10:59.786 "aliases": [ 00:10:59.786 "0a27fa14-03e9-4c37-8ceb-b919924b5979" 00:10:59.786 ], 00:10:59.786 "product_name": "Malloc disk", 00:10:59.786 "block_size": 512, 00:10:59.786 "num_blocks": 65536, 00:10:59.786 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:10:59.786 "assigned_rate_limits": { 00:10:59.786 "rw_ios_per_sec": 0, 00:10:59.786 "rw_mbytes_per_sec": 0, 00:10:59.786 "r_mbytes_per_sec": 0, 00:10:59.786 "w_mbytes_per_sec": 0 00:10:59.786 }, 00:10:59.786 "claimed": false, 00:10:59.786 "zoned": false, 00:10:59.786 "supported_io_types": { 00:10:59.786 "read": true, 00:10:59.786 "write": true, 00:10:59.786 "unmap": true, 00:10:59.786 "flush": true, 00:10:59.786 "reset": true, 00:10:59.786 "nvme_admin": false, 00:10:59.786 "nvme_io": false, 00:10:59.786 "nvme_io_md": false, 00:10:59.786 "write_zeroes": true, 00:10:59.786 "zcopy": true, 00:10:59.786 "get_zone_info": false, 00:10:59.786 "zone_management": false, 00:10:59.786 "zone_append": false, 00:10:59.786 "compare": false, 00:10:59.786 "compare_and_write": false, 00:10:59.786 "abort": true, 00:10:59.786 "seek_hole": false, 00:10:59.786 "seek_data": false, 00:10:59.786 "copy": true, 00:10:59.786 "nvme_iov_md": false 00:10:59.786 }, 00:10:59.786 "memory_domains": [ 00:10:59.786 { 00:10:59.786 "dma_device_id": "system", 00:10:59.786 "dma_device_type": 1 00:10:59.786 }, 00:10:59.786 { 00:10:59.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.786 "dma_device_type": 2 00:10:59.786 } 00:10:59.786 ], 00:10:59.786 "driver_specific": {} 00:10:59.786 } 00:10:59.786 ] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.786 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.787 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:59.787 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:59.787 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.787 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.046 BaseBdev4 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:00.046 [ 00:11:00.046 { 00:11:00.046 "name": "BaseBdev4", 00:11:00.046 "aliases": [ 00:11:00.046 "f8d0a892-975a-487f-9095-b6e6870b5221" 00:11:00.046 ], 00:11:00.046 "product_name": "Malloc disk", 00:11:00.046 "block_size": 512, 00:11:00.046 "num_blocks": 65536, 00:11:00.046 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:00.046 "assigned_rate_limits": { 00:11:00.046 "rw_ios_per_sec": 0, 00:11:00.046 "rw_mbytes_per_sec": 0, 00:11:00.046 "r_mbytes_per_sec": 0, 00:11:00.046 "w_mbytes_per_sec": 0 00:11:00.046 }, 00:11:00.046 "claimed": false, 00:11:00.046 "zoned": false, 00:11:00.046 "supported_io_types": { 00:11:00.046 "read": true, 00:11:00.046 "write": true, 00:11:00.046 "unmap": true, 00:11:00.046 "flush": true, 00:11:00.046 "reset": true, 00:11:00.046 "nvme_admin": false, 00:11:00.046 "nvme_io": false, 00:11:00.046 "nvme_io_md": false, 00:11:00.046 "write_zeroes": true, 00:11:00.046 "zcopy": true, 00:11:00.046 "get_zone_info": false, 00:11:00.046 "zone_management": false, 00:11:00.046 "zone_append": false, 00:11:00.046 "compare": false, 00:11:00.046 "compare_and_write": false, 00:11:00.046 "abort": true, 00:11:00.046 "seek_hole": false, 00:11:00.046 "seek_data": false, 00:11:00.046 "copy": true, 00:11:00.046 "nvme_iov_md": false 00:11:00.046 }, 00:11:00.046 "memory_domains": [ 00:11:00.046 { 00:11:00.046 "dma_device_id": "system", 00:11:00.046 "dma_device_type": 1 00:11:00.046 }, 00:11:00.046 { 00:11:00.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.046 "dma_device_type": 2 00:11:00.046 } 00:11:00.046 ], 00:11:00.046 "driver_specific": {} 00:11:00.046 } 00:11:00.046 ] 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:00.046 13:27:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.046 [2024-11-18 13:27:29.896303] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.046 [2024-11-18 13:27:29.896426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.046 [2024-11-18 13:27:29.896467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.046 [2024-11-18 13:27:29.898219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.046 [2024-11-18 13:27:29.898306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.046 "name": "Existed_Raid", 00:11:00.046 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:00.046 "strip_size_kb": 64, 00:11:00.046 "state": "configuring", 00:11:00.046 "raid_level": "raid0", 00:11:00.046 "superblock": true, 00:11:00.046 "num_base_bdevs": 4, 00:11:00.046 "num_base_bdevs_discovered": 3, 00:11:00.046 "num_base_bdevs_operational": 4, 00:11:00.046 "base_bdevs_list": [ 00:11:00.046 { 00:11:00.046 "name": "BaseBdev1", 00:11:00.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.046 "is_configured": false, 00:11:00.046 "data_offset": 0, 00:11:00.046 "data_size": 0 00:11:00.046 }, 00:11:00.046 { 00:11:00.046 "name": "BaseBdev2", 00:11:00.046 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:00.046 "is_configured": true, 00:11:00.046 "data_offset": 2048, 00:11:00.046 "data_size": 63488 
00:11:00.046 }, 00:11:00.046 { 00:11:00.046 "name": "BaseBdev3", 00:11:00.046 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:00.046 "is_configured": true, 00:11:00.046 "data_offset": 2048, 00:11:00.046 "data_size": 63488 00:11:00.046 }, 00:11:00.046 { 00:11:00.046 "name": "BaseBdev4", 00:11:00.046 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:00.046 "is_configured": true, 00:11:00.046 "data_offset": 2048, 00:11:00.046 "data_size": 63488 00:11:00.046 } 00:11:00.046 ] 00:11:00.046 }' 00:11:00.046 13:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.047 13:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.306 [2024-11-18 13:27:30.327620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.306 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.307 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.307 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.566 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.566 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.566 "name": "Existed_Raid", 00:11:00.566 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:00.566 "strip_size_kb": 64, 00:11:00.566 "state": "configuring", 00:11:00.566 "raid_level": "raid0", 00:11:00.566 "superblock": true, 00:11:00.566 "num_base_bdevs": 4, 00:11:00.566 "num_base_bdevs_discovered": 2, 00:11:00.566 "num_base_bdevs_operational": 4, 00:11:00.566 "base_bdevs_list": [ 00:11:00.566 { 00:11:00.566 "name": "BaseBdev1", 00:11:00.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.566 "is_configured": false, 00:11:00.566 "data_offset": 0, 00:11:00.566 "data_size": 0 00:11:00.566 }, 00:11:00.566 { 00:11:00.566 "name": null, 00:11:00.566 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:00.566 "is_configured": false, 00:11:00.566 "data_offset": 0, 00:11:00.566 "data_size": 63488 
00:11:00.566 }, 00:11:00.566 { 00:11:00.566 "name": "BaseBdev3", 00:11:00.566 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:00.566 "is_configured": true, 00:11:00.566 "data_offset": 2048, 00:11:00.566 "data_size": 63488 00:11:00.566 }, 00:11:00.566 { 00:11:00.566 "name": "BaseBdev4", 00:11:00.566 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:00.566 "is_configured": true, 00:11:00.566 "data_offset": 2048, 00:11:00.566 "data_size": 63488 00:11:00.566 } 00:11:00.566 ] 00:11:00.566 }' 00:11:00.566 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.566 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.827 [2024-11-18 13:27:30.819498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.827 BaseBdev1 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.827 [ 00:11:00.827 { 00:11:00.827 "name": "BaseBdev1", 00:11:00.827 "aliases": [ 00:11:00.827 "9a133b57-5de2-4100-a46a-eae9f5da2a9c" 00:11:00.827 ], 00:11:00.827 "product_name": "Malloc disk", 00:11:00.827 "block_size": 512, 00:11:00.827 "num_blocks": 65536, 00:11:00.827 "uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:00.827 "assigned_rate_limits": { 00:11:00.827 "rw_ios_per_sec": 0, 00:11:00.827 "rw_mbytes_per_sec": 0, 
00:11:00.827 "r_mbytes_per_sec": 0, 00:11:00.827 "w_mbytes_per_sec": 0 00:11:00.827 }, 00:11:00.827 "claimed": true, 00:11:00.827 "claim_type": "exclusive_write", 00:11:00.827 "zoned": false, 00:11:00.827 "supported_io_types": { 00:11:00.827 "read": true, 00:11:00.827 "write": true, 00:11:00.827 "unmap": true, 00:11:00.827 "flush": true, 00:11:00.827 "reset": true, 00:11:00.827 "nvme_admin": false, 00:11:00.827 "nvme_io": false, 00:11:00.827 "nvme_io_md": false, 00:11:00.827 "write_zeroes": true, 00:11:00.827 "zcopy": true, 00:11:00.827 "get_zone_info": false, 00:11:00.827 "zone_management": false, 00:11:00.827 "zone_append": false, 00:11:00.827 "compare": false, 00:11:00.827 "compare_and_write": false, 00:11:00.827 "abort": true, 00:11:00.827 "seek_hole": false, 00:11:00.827 "seek_data": false, 00:11:00.827 "copy": true, 00:11:00.827 "nvme_iov_md": false 00:11:00.827 }, 00:11:00.827 "memory_domains": [ 00:11:00.827 { 00:11:00.827 "dma_device_id": "system", 00:11:00.827 "dma_device_type": 1 00:11:00.827 }, 00:11:00.827 { 00:11:00.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.827 "dma_device_type": 2 00:11:00.827 } 00:11:00.827 ], 00:11:00.827 "driver_specific": {} 00:11:00.827 } 00:11:00.827 ] 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.827 13:27:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.827 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.087 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.087 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.087 "name": "Existed_Raid", 00:11:01.087 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:01.087 "strip_size_kb": 64, 00:11:01.087 "state": "configuring", 00:11:01.087 "raid_level": "raid0", 00:11:01.087 "superblock": true, 00:11:01.087 "num_base_bdevs": 4, 00:11:01.087 "num_base_bdevs_discovered": 3, 00:11:01.087 "num_base_bdevs_operational": 4, 00:11:01.087 "base_bdevs_list": [ 00:11:01.087 { 00:11:01.087 "name": "BaseBdev1", 00:11:01.087 "uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:01.087 "is_configured": true, 00:11:01.087 "data_offset": 2048, 00:11:01.087 "data_size": 63488 00:11:01.087 }, 00:11:01.087 { 
00:11:01.087 "name": null, 00:11:01.087 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:01.087 "is_configured": false, 00:11:01.087 "data_offset": 0, 00:11:01.087 "data_size": 63488 00:11:01.088 }, 00:11:01.088 { 00:11:01.088 "name": "BaseBdev3", 00:11:01.088 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:01.088 "is_configured": true, 00:11:01.088 "data_offset": 2048, 00:11:01.088 "data_size": 63488 00:11:01.088 }, 00:11:01.088 { 00:11:01.088 "name": "BaseBdev4", 00:11:01.088 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:01.088 "is_configured": true, 00:11:01.088 "data_offset": 2048, 00:11:01.088 "data_size": 63488 00:11:01.088 } 00:11:01.088 ] 00:11:01.088 }' 00:11:01.088 13:27:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.088 13:27:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.348 [2024-11-18 13:27:31.386709] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.348 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.607 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.607 13:27:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.607 "name": "Existed_Raid", 00:11:01.607 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:01.607 "strip_size_kb": 64, 00:11:01.607 "state": "configuring", 00:11:01.607 "raid_level": "raid0", 00:11:01.607 "superblock": true, 00:11:01.607 "num_base_bdevs": 4, 00:11:01.607 "num_base_bdevs_discovered": 2, 00:11:01.607 "num_base_bdevs_operational": 4, 00:11:01.607 "base_bdevs_list": [ 00:11:01.607 { 00:11:01.607 "name": "BaseBdev1", 00:11:01.607 "uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:01.607 "is_configured": true, 00:11:01.607 "data_offset": 2048, 00:11:01.607 "data_size": 63488 00:11:01.607 }, 00:11:01.607 { 00:11:01.607 "name": null, 00:11:01.607 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:01.607 "is_configured": false, 00:11:01.607 "data_offset": 0, 00:11:01.607 "data_size": 63488 00:11:01.607 }, 00:11:01.607 { 00:11:01.607 "name": null, 00:11:01.607 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:01.607 "is_configured": false, 00:11:01.607 "data_offset": 0, 00:11:01.607 "data_size": 63488 00:11:01.607 }, 00:11:01.607 { 00:11:01.607 "name": "BaseBdev4", 00:11:01.607 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:01.607 "is_configured": true, 00:11:01.607 "data_offset": 2048, 00:11:01.607 "data_size": 63488 00:11:01.607 } 00:11:01.607 ] 00:11:01.607 }' 00:11:01.607 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.607 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.867 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.867 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.867 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.867 13:27:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.867 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.127 [2024-11-18 13:27:31.929817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.127 "name": "Existed_Raid", 00:11:02.127 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:02.127 "strip_size_kb": 64, 00:11:02.127 "state": "configuring", 00:11:02.127 "raid_level": "raid0", 00:11:02.127 "superblock": true, 00:11:02.127 "num_base_bdevs": 4, 00:11:02.127 "num_base_bdevs_discovered": 3, 00:11:02.127 "num_base_bdevs_operational": 4, 00:11:02.127 "base_bdevs_list": [ 00:11:02.127 { 00:11:02.127 "name": "BaseBdev1", 00:11:02.127 "uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:02.127 "is_configured": true, 00:11:02.127 "data_offset": 2048, 00:11:02.127 "data_size": 63488 00:11:02.127 }, 00:11:02.127 { 00:11:02.127 "name": null, 00:11:02.127 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:02.127 "is_configured": false, 00:11:02.127 "data_offset": 0, 00:11:02.127 "data_size": 63488 00:11:02.127 }, 00:11:02.127 { 00:11:02.127 "name": "BaseBdev3", 00:11:02.127 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:02.127 "is_configured": true, 00:11:02.127 "data_offset": 2048, 00:11:02.127 "data_size": 63488 00:11:02.127 }, 00:11:02.127 { 00:11:02.127 "name": "BaseBdev4", 00:11:02.127 "uuid": 
"f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:02.127 "is_configured": true, 00:11:02.127 "data_offset": 2048, 00:11:02.127 "data_size": 63488 00:11:02.127 } 00:11:02.127 ] 00:11:02.127 }' 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.127 13:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.387 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.387 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.387 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.387 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:02.387 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.647 [2024-11-18 13:27:32.472884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.647 "name": "Existed_Raid", 00:11:02.647 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:02.647 "strip_size_kb": 64, 00:11:02.647 "state": "configuring", 00:11:02.647 "raid_level": "raid0", 00:11:02.647 "superblock": true, 00:11:02.647 "num_base_bdevs": 4, 00:11:02.647 "num_base_bdevs_discovered": 2, 00:11:02.647 "num_base_bdevs_operational": 4, 00:11:02.647 "base_bdevs_list": [ 00:11:02.647 { 00:11:02.647 "name": null, 00:11:02.647 
"uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:02.647 "is_configured": false, 00:11:02.647 "data_offset": 0, 00:11:02.647 "data_size": 63488 00:11:02.647 }, 00:11:02.647 { 00:11:02.647 "name": null, 00:11:02.647 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:02.647 "is_configured": false, 00:11:02.647 "data_offset": 0, 00:11:02.647 "data_size": 63488 00:11:02.647 }, 00:11:02.647 { 00:11:02.647 "name": "BaseBdev3", 00:11:02.647 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:02.647 "is_configured": true, 00:11:02.647 "data_offset": 2048, 00:11:02.647 "data_size": 63488 00:11:02.647 }, 00:11:02.647 { 00:11:02.647 "name": "BaseBdev4", 00:11:02.647 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:02.647 "is_configured": true, 00:11:02.647 "data_offset": 2048, 00:11:02.647 "data_size": 63488 00:11:02.647 } 00:11:02.647 ] 00:11:02.647 }' 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.647 13:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.228 [2024-11-18 13:27:33.069720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.228 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.229 13:27:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.229 "name": "Existed_Raid", 00:11:03.229 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:03.229 "strip_size_kb": 64, 00:11:03.229 "state": "configuring", 00:11:03.229 "raid_level": "raid0", 00:11:03.229 "superblock": true, 00:11:03.229 "num_base_bdevs": 4, 00:11:03.229 "num_base_bdevs_discovered": 3, 00:11:03.229 "num_base_bdevs_operational": 4, 00:11:03.229 "base_bdevs_list": [ 00:11:03.229 { 00:11:03.229 "name": null, 00:11:03.229 "uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:03.229 "is_configured": false, 00:11:03.229 "data_offset": 0, 00:11:03.229 "data_size": 63488 00:11:03.229 }, 00:11:03.229 { 00:11:03.229 "name": "BaseBdev2", 00:11:03.229 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:03.229 "is_configured": true, 00:11:03.229 "data_offset": 2048, 00:11:03.229 "data_size": 63488 00:11:03.229 }, 00:11:03.229 { 00:11:03.229 "name": "BaseBdev3", 00:11:03.229 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:03.229 "is_configured": true, 00:11:03.229 "data_offset": 2048, 00:11:03.229 "data_size": 63488 00:11:03.229 }, 00:11:03.229 { 00:11:03.229 "name": "BaseBdev4", 00:11:03.229 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:03.229 "is_configured": true, 00:11:03.229 "data_offset": 2048, 00:11:03.229 "data_size": 63488 00:11:03.229 } 00:11:03.229 ] 00:11:03.229 }' 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.229 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.488 13:27:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.488 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9a133b57-5de2-4100-a46a-eae9f5da2a9c 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.748 [2024-11-18 13:27:33.621074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:03.748 [2024-11-18 13:27:33.621350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:03.748 [2024-11-18 13:27:33.621363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.748 [2024-11-18 13:27:33.621611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:03.748 [2024-11-18 13:27:33.621761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:03.748 [2024-11-18 13:27:33.621773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:03.748 [2024-11-18 13:27:33.621900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.748 NewBaseBdev 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:03.748 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.748 13:27:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.748 [ 00:11:03.748 { 00:11:03.748 "name": "NewBaseBdev", 00:11:03.748 "aliases": [ 00:11:03.748 "9a133b57-5de2-4100-a46a-eae9f5da2a9c" 00:11:03.748 ], 00:11:03.748 "product_name": "Malloc disk", 00:11:03.748 "block_size": 512, 00:11:03.748 "num_blocks": 65536, 00:11:03.748 "uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:03.748 "assigned_rate_limits": { 00:11:03.748 "rw_ios_per_sec": 0, 00:11:03.748 "rw_mbytes_per_sec": 0, 00:11:03.748 "r_mbytes_per_sec": 0, 00:11:03.748 "w_mbytes_per_sec": 0 00:11:03.748 }, 00:11:03.748 "claimed": true, 00:11:03.748 "claim_type": "exclusive_write", 00:11:03.748 "zoned": false, 00:11:03.748 "supported_io_types": { 00:11:03.748 "read": true, 00:11:03.748 "write": true, 00:11:03.748 "unmap": true, 00:11:03.748 "flush": true, 00:11:03.748 "reset": true, 00:11:03.748 "nvme_admin": false, 00:11:03.748 "nvme_io": false, 00:11:03.748 "nvme_io_md": false, 00:11:03.748 "write_zeroes": true, 00:11:03.748 "zcopy": true, 00:11:03.748 "get_zone_info": false, 00:11:03.748 "zone_management": false, 00:11:03.748 "zone_append": false, 00:11:03.748 "compare": false, 00:11:03.749 "compare_and_write": false, 00:11:03.749 "abort": true, 00:11:03.749 "seek_hole": false, 00:11:03.749 "seek_data": false, 00:11:03.749 "copy": true, 00:11:03.749 "nvme_iov_md": false 00:11:03.749 }, 00:11:03.749 "memory_domains": [ 00:11:03.749 { 00:11:03.749 "dma_device_id": "system", 00:11:03.749 "dma_device_type": 1 00:11:03.749 }, 00:11:03.749 { 00:11:03.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.749 "dma_device_type": 2 00:11:03.749 } 00:11:03.749 ], 00:11:03.749 "driver_specific": {} 00:11:03.749 } 00:11:03.749 ] 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:03.749 13:27:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.749 "name": "Existed_Raid", 00:11:03.749 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:03.749 "strip_size_kb": 64, 00:11:03.749 
"state": "online", 00:11:03.749 "raid_level": "raid0", 00:11:03.749 "superblock": true, 00:11:03.749 "num_base_bdevs": 4, 00:11:03.749 "num_base_bdevs_discovered": 4, 00:11:03.749 "num_base_bdevs_operational": 4, 00:11:03.749 "base_bdevs_list": [ 00:11:03.749 { 00:11:03.749 "name": "NewBaseBdev", 00:11:03.749 "uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:03.749 "is_configured": true, 00:11:03.749 "data_offset": 2048, 00:11:03.749 "data_size": 63488 00:11:03.749 }, 00:11:03.749 { 00:11:03.749 "name": "BaseBdev2", 00:11:03.749 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:03.749 "is_configured": true, 00:11:03.749 "data_offset": 2048, 00:11:03.749 "data_size": 63488 00:11:03.749 }, 00:11:03.749 { 00:11:03.749 "name": "BaseBdev3", 00:11:03.749 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:03.749 "is_configured": true, 00:11:03.749 "data_offset": 2048, 00:11:03.749 "data_size": 63488 00:11:03.749 }, 00:11:03.749 { 00:11:03.749 "name": "BaseBdev4", 00:11:03.749 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:03.749 "is_configured": true, 00:11:03.749 "data_offset": 2048, 00:11:03.749 "data_size": 63488 00:11:03.749 } 00:11:03.749 ] 00:11:03.749 }' 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.749 13:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.318 
13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.318 [2024-11-18 13:27:34.136609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.318 "name": "Existed_Raid", 00:11:04.318 "aliases": [ 00:11:04.318 "13931085-a277-4ca5-81d2-bf8f90929edd" 00:11:04.318 ], 00:11:04.318 "product_name": "Raid Volume", 00:11:04.318 "block_size": 512, 00:11:04.318 "num_blocks": 253952, 00:11:04.318 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:04.318 "assigned_rate_limits": { 00:11:04.318 "rw_ios_per_sec": 0, 00:11:04.318 "rw_mbytes_per_sec": 0, 00:11:04.318 "r_mbytes_per_sec": 0, 00:11:04.318 "w_mbytes_per_sec": 0 00:11:04.318 }, 00:11:04.318 "claimed": false, 00:11:04.318 "zoned": false, 00:11:04.318 "supported_io_types": { 00:11:04.318 "read": true, 00:11:04.318 "write": true, 00:11:04.318 "unmap": true, 00:11:04.318 "flush": true, 00:11:04.318 "reset": true, 00:11:04.318 "nvme_admin": false, 00:11:04.318 "nvme_io": false, 00:11:04.318 "nvme_io_md": false, 00:11:04.318 "write_zeroes": true, 00:11:04.318 "zcopy": false, 00:11:04.318 "get_zone_info": false, 00:11:04.318 "zone_management": false, 00:11:04.318 "zone_append": false, 00:11:04.318 "compare": false, 00:11:04.318 "compare_and_write": false, 00:11:04.318 "abort": 
false, 00:11:04.318 "seek_hole": false, 00:11:04.318 "seek_data": false, 00:11:04.318 "copy": false, 00:11:04.318 "nvme_iov_md": false 00:11:04.318 }, 00:11:04.318 "memory_domains": [ 00:11:04.318 { 00:11:04.318 "dma_device_id": "system", 00:11:04.318 "dma_device_type": 1 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.318 "dma_device_type": 2 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "dma_device_id": "system", 00:11:04.318 "dma_device_type": 1 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.318 "dma_device_type": 2 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "dma_device_id": "system", 00:11:04.318 "dma_device_type": 1 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.318 "dma_device_type": 2 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "dma_device_id": "system", 00:11:04.318 "dma_device_type": 1 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.318 "dma_device_type": 2 00:11:04.318 } 00:11:04.318 ], 00:11:04.318 "driver_specific": { 00:11:04.318 "raid": { 00:11:04.318 "uuid": "13931085-a277-4ca5-81d2-bf8f90929edd", 00:11:04.318 "strip_size_kb": 64, 00:11:04.318 "state": "online", 00:11:04.318 "raid_level": "raid0", 00:11:04.318 "superblock": true, 00:11:04.318 "num_base_bdevs": 4, 00:11:04.318 "num_base_bdevs_discovered": 4, 00:11:04.318 "num_base_bdevs_operational": 4, 00:11:04.318 "base_bdevs_list": [ 00:11:04.318 { 00:11:04.318 "name": "NewBaseBdev", 00:11:04.318 "uuid": "9a133b57-5de2-4100-a46a-eae9f5da2a9c", 00:11:04.318 "is_configured": true, 00:11:04.318 "data_offset": 2048, 00:11:04.318 "data_size": 63488 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "name": "BaseBdev2", 00:11:04.318 "uuid": "155fd272-2432-4db1-bc0b-61dfea89e35d", 00:11:04.318 "is_configured": true, 00:11:04.318 "data_offset": 2048, 00:11:04.318 "data_size": 63488 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 
"name": "BaseBdev3", 00:11:04.318 "uuid": "0a27fa14-03e9-4c37-8ceb-b919924b5979", 00:11:04.318 "is_configured": true, 00:11:04.318 "data_offset": 2048, 00:11:04.318 "data_size": 63488 00:11:04.318 }, 00:11:04.318 { 00:11:04.318 "name": "BaseBdev4", 00:11:04.318 "uuid": "f8d0a892-975a-487f-9095-b6e6870b5221", 00:11:04.318 "is_configured": true, 00:11:04.318 "data_offset": 2048, 00:11:04.318 "data_size": 63488 00:11:04.318 } 00:11:04.318 ] 00:11:04.318 } 00:11:04.318 } 00:11:04.318 }' 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:04.318 BaseBdev2 00:11:04.318 BaseBdev3 00:11:04.318 BaseBdev4' 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.318 13:27:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.318 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.578 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.579 [2024-11-18 13:27:34.499601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.579 [2024-11-18 13:27:34.499695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.579 [2024-11-18 13:27:34.499792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.579 [2024-11-18 13:27:34.499877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.579 [2024-11-18 13:27:34.499936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70063 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70063 ']' 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70063 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70063 00:11:04.579 killing process with pid 70063 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70063' 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70063 00:11:04.579 [2024-11-18 13:27:34.547247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.579 13:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70063 00:11:05.148 [2024-11-18 13:27:34.942887] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.088 13:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:06.088 00:11:06.088 real 0m11.851s 00:11:06.088 user 0m18.876s 00:11:06.088 sys 0m2.214s 00:11:06.088 13:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.088 13:27:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.088 ************************************ 00:11:06.088 END TEST raid_state_function_test_sb 00:11:06.088 ************************************ 00:11:06.088 13:27:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:06.088 13:27:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:06.088 13:27:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.088 13:27:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.088 ************************************ 00:11:06.088 START TEST raid_superblock_test 00:11:06.088 ************************************ 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70728 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70728 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70728 ']' 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.088 13:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.348 [2024-11-18 13:27:36.202986] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:06.348 [2024-11-18 13:27:36.203196] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70728 ]
00:11:06.348 [2024-11-18 13:27:36.377364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:06.608 [2024-11-18 13:27:36.493401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:06.868 [2024-11-18 13:27:36.693063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:06.868 [2024-11-18 13:27:36.693212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.127 malloc1
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.127 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.128 [2024-11-18 13:27:37.109729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:07.128 [2024-11-18 13:27:37.109799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:07.128 [2024-11-18 13:27:37.109822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:07.128 [2024-11-18 13:27:37.109842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:07.128 [2024-11-18 13:27:37.111910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:07.128 [2024-11-18 13:27:37.112032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:07.128 pt1
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.128 malloc2
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.128 [2024-11-18 13:27:37.164894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:07.128 [2024-11-18 13:27:37.165042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:07.128 [2024-11-18 13:27:37.165081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:07.128 [2024-11-18 13:27:37.165113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:07.128 [2024-11-18 13:27:37.167283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:07.128 [2024-11-18 13:27:37.167354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:07.128 pt2
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.128 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.387 malloc3
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.387 [2024-11-18 13:27:37.235042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:07.387 [2024-11-18 13:27:37.235177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:07.387 [2024-11-18 13:27:37.235216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:07.387 [2024-11-18 13:27:37.235267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:07.387 [2024-11-18 13:27:37.237451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:07.387 [2024-11-18 13:27:37.237526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:07.387 pt3
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:07.387 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.388 malloc4
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.388 [2024-11-18 13:27:37.289898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:07.388 [2024-11-18 13:27:37.290029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:07.388 [2024-11-18 13:27:37.290075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:11:07.388 [2024-11-18 13:27:37.290115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:07.388 [2024-11-18 13:27:37.292227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:07.388 [2024-11-18 13:27:37.292295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:07.388 pt4
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.388 [2024-11-18 13:27:37.305895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:07.388 [2024-11-18 13:27:37.307759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:07.388 [2024-11-18 13:27:37.307865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:07.388 [2024-11-18 13:27:37.307944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:07.388 [2024-11-18 13:27:37.308170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:07.388 [2024-11-18 13:27:37.308216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:07.388 [2024-11-18 13:27:37.308478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:07.388 [2024-11-18 13:27:37.308671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:07.388 [2024-11-18 13:27:37.308714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:07.388 [2024-11-18 13:27:37.308885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:07.388 "name": "raid_bdev1",
00:11:07.388 "uuid": "81f70068-3519-4a35-80af-57e211a84f03",
00:11:07.388 "strip_size_kb": 64,
00:11:07.388 "state": "online",
00:11:07.388 "raid_level": "raid0",
00:11:07.388 "superblock": true,
00:11:07.388 "num_base_bdevs": 4,
00:11:07.388 "num_base_bdevs_discovered": 4,
00:11:07.388 "num_base_bdevs_operational": 4,
00:11:07.388 "base_bdevs_list": [
00:11:07.388 {
00:11:07.388 "name": "pt1",
00:11:07.388 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:07.388 "is_configured": true,
00:11:07.388 "data_offset": 2048,
00:11:07.388 "data_size": 63488
00:11:07.388 },
00:11:07.388 {
00:11:07.388 "name": "pt2",
00:11:07.388 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:07.388 "is_configured": true,
00:11:07.388 "data_offset": 2048,
00:11:07.388 "data_size": 63488
00:11:07.388 },
00:11:07.388 {
00:11:07.388 "name": "pt3",
00:11:07.388 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:07.388 "is_configured": true,
00:11:07.388 "data_offset": 2048,
00:11:07.388 "data_size": 63488
00:11:07.388 },
00:11:07.388 {
00:11:07.388 "name": "pt4",
00:11:07.388 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:07.388 "is_configured": true,
00:11:07.388 "data_offset": 2048,
00:11:07.388 "data_size": 63488
00:11:07.388 }
00:11:07.388 ]
00:11:07.388 }'
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:07.388 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.956 [2024-11-18 13:27:37.781410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.956 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:07.956 "name": "raid_bdev1",
00:11:07.956 "aliases": [
00:11:07.956 "81f70068-3519-4a35-80af-57e211a84f03"
00:11:07.956 ],
00:11:07.956 "product_name": "Raid Volume",
00:11:07.956 "block_size": 512,
00:11:07.956 "num_blocks": 253952,
00:11:07.956 "uuid": "81f70068-3519-4a35-80af-57e211a84f03",
00:11:07.956 "assigned_rate_limits": {
00:11:07.956 "rw_ios_per_sec": 0,
00:11:07.956 "rw_mbytes_per_sec": 0,
00:11:07.956 "r_mbytes_per_sec": 0,
00:11:07.956 "w_mbytes_per_sec": 0
00:11:07.956 },
00:11:07.956 "claimed": false,
00:11:07.956 "zoned": false,
00:11:07.956 "supported_io_types": {
00:11:07.956 "read": true,
00:11:07.956 "write": true,
00:11:07.956 "unmap": true,
00:11:07.956 "flush": true,
00:11:07.956 "reset": true,
00:11:07.956 "nvme_admin": false,
00:11:07.956 "nvme_io": false,
00:11:07.956 "nvme_io_md": false,
00:11:07.956 "write_zeroes": true,
00:11:07.956 "zcopy": false,
00:11:07.956 "get_zone_info": false,
00:11:07.956 "zone_management": false,
00:11:07.956 "zone_append": false,
00:11:07.956 "compare": false,
00:11:07.956 "compare_and_write": false,
00:11:07.956 "abort": false,
00:11:07.956 "seek_hole": false,
00:11:07.956 "seek_data": false,
00:11:07.956 "copy": false,
00:11:07.956 "nvme_iov_md": false
00:11:07.956 },
00:11:07.956 "memory_domains": [
00:11:07.956 {
00:11:07.956 "dma_device_id": "system",
00:11:07.956 "dma_device_type": 1
00:11:07.956 },
00:11:07.956 {
00:11:07.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:07.956 "dma_device_type": 2
00:11:07.956 },
00:11:07.956 {
00:11:07.956 "dma_device_id": "system",
00:11:07.956 "dma_device_type": 1
00:11:07.956 },
00:11:07.956 {
00:11:07.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:07.956 "dma_device_type": 2
00:11:07.956 },
00:11:07.956 {
00:11:07.956 "dma_device_id": "system",
00:11:07.956 "dma_device_type": 1
00:11:07.956 },
00:11:07.956 {
00:11:07.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:07.956 "dma_device_type": 2
00:11:07.956 },
00:11:07.956 {
00:11:07.956 "dma_device_id": "system",
00:11:07.956 "dma_device_type": 1
00:11:07.956 },
00:11:07.956 {
00:11:07.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:07.956 "dma_device_type": 2
00:11:07.956 }
00:11:07.956 ],
00:11:07.956 "driver_specific": {
00:11:07.956 "raid": {
00:11:07.956 "uuid": "81f70068-3519-4a35-80af-57e211a84f03",
00:11:07.956 "strip_size_kb": 64,
00:11:07.956 "state": "online",
00:11:07.957 "raid_level": "raid0",
00:11:07.957 "superblock": true,
00:11:07.957 "num_base_bdevs": 4,
00:11:07.957 "num_base_bdevs_discovered": 4,
00:11:07.957 "num_base_bdevs_operational": 4,
00:11:07.957 "base_bdevs_list": [
00:11:07.957 {
00:11:07.957 "name": "pt1",
00:11:07.957 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:07.957 "is_configured": true,
00:11:07.957 "data_offset": 2048,
00:11:07.957 "data_size": 63488
00:11:07.957 },
00:11:07.957 {
00:11:07.957 "name": "pt2",
00:11:07.957 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:07.957 "is_configured": true,
00:11:07.957 "data_offset": 2048,
00:11:07.957 "data_size": 63488
00:11:07.957 },
00:11:07.957 {
00:11:07.957 "name": "pt3",
00:11:07.957 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:07.957 "is_configured": true,
00:11:07.957 "data_offset": 2048,
00:11:07.957 "data_size": 63488
00:11:07.957 },
00:11:07.957 {
00:11:07.957 "name": "pt4",
00:11:07.957 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:07.957 "is_configured": true,
00:11:07.957 "data_offset": 2048,
00:11:07.957 "data_size": 63488
00:11:07.957 }
00:11:07.957 ]
00:11:07.957 }
00:11:07.957 }
00:11:07.957 }'
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:07.957 pt2
00:11:07.957 pt3
00:11:07.957 pt4'
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512
'
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512
'
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.957 13:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.957 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512
'
00:11:07.957 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:07.957 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:08.218 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512
'
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512
'
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.219 [2024-11-18 13:27:38.128704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=81f70068-3519-4a35-80af-57e211a84f03
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 81f70068-3519-4a35-80af-57e211a84f03 ']'
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.219 [2024-11-18 13:27:38.172349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:08.219 [2024-11-18 13:27:38.172373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:08.219 [2024-11-18 13:27:38.172446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:08.219 [2024-11-18 13:27:38.172512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:08.219 [2024-11-18 13:27:38.172526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.219 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.485 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.485 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:08.485 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.485 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.485 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:08.485 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.486 [2024-11-18 13:27:38.344100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:08.486 [2024-11-18 13:27:38.346067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:08.486 [2024-11-18 13:27:38.346115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:08.486 [2024-11-18 13:27:38.346158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:11:08.486 [2024-11-18 13:27:38.346209] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:08.486 [2024-11-18 13:27:38.346254] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:08.486 [2024-11-18 13:27:38.346273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:08.486 [2024-11-18 13:27:38.346290] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:11:08.486 [2024-11-18 13:27:38.346303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:08.486 [2024-11-18 13:27:38.346314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:11:08.486 request:
00:11:08.486 {
00:11:08.486 "name": "raid_bdev1",
00:11:08.486 "raid_level": "raid0",
00:11:08.486 "base_bdevs": [
00:11:08.486 "malloc1",
00:11:08.486 "malloc2",
00:11:08.486 "malloc3",
00:11:08.486 "malloc4"
00:11:08.486 ],
00:11:08.486 "strip_size_kb": 64,
00:11:08.486 "superblock": false,
00:11:08.486 "method": "bdev_raid_create",
00:11:08.486 "req_id": 1
00:11:08.486 }
00:11:08.486 Got JSON-RPC error response
00:11:08.486 response:
00:11:08.486 {
00:11:08.486 "code": -17,
00:11:08.486 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:08.486 }
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.486 [2024-11-18 13:27:38.407963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:08.486 [2024-11-18 13:27:38.408014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:08.486 [2024-11-18 13:27:38.408030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:08.486 [2024-11-18 13:27:38.408040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:08.486 [2024-11-18 13:27:38.410159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:08.486 [2024-11-18 13:27:38.410186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:08.486 [2024-11-18 13:27:38.410258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:08.486 [2024-11-18 13:27:38.410319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:08.486 pt1
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:08.486 "name": "raid_bdev1",
00:11:08.486 "uuid": "81f70068-3519-4a35-80af-57e211a84f03",
00:11:08.486 "strip_size_kb": 64,
00:11:08.486 "state": "configuring",
00:11:08.486 "raid_level": "raid0",
00:11:08.486 "superblock": true,
00:11:08.486 "num_base_bdevs": 4,
00:11:08.486 "num_base_bdevs_discovered": 1,
00:11:08.486 "num_base_bdevs_operational": 4,
00:11:08.486 "base_bdevs_list": [
00:11:08.486 {
00:11:08.486 "name": "pt1",
00:11:08.486 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:08.486 "is_configured": true,
00:11:08.486 "data_offset": 2048,
00:11:08.486 "data_size": 63488
00:11:08.486 },
00:11:08.486 {
00:11:08.486 "name": null,
00:11:08.486 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:08.486 "is_configured": false,
00:11:08.486 "data_offset": 2048,
00:11:08.486 "data_size": 63488
00:11:08.486 },
00:11:08.486 {
00:11:08.486 "name": null,
00:11:08.486 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:08.486 "is_configured": false,
00:11:08.486 "data_offset": 2048,
00:11:08.486 "data_size": 63488
00:11:08.486 },
00:11:08.486 {
00:11:08.486 "name": null,
00:11:08.486 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:08.486 "is_configured": false,
00:11:08.486 "data_offset": 2048,
00:11:08.486 "data_size": 63488
00:11:08.486 }
00:11:08.486 ]
00:11:08.486 }'
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:08.486 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:09.056 [2024-11-18 13:27:38.891196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:09.056 [2024-11-18 13:27:38.891278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:09.056 [2024-11-18 13:27:38.891299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:11:09.056 [2024-11-18 13:27:38.891311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:09.056 [2024-11-18 13:27:38.891759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:09.056 [2024-11-18 13:27:38.891780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:09.056 [2024-11-18 13:27:38.891874] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:09.056 [2024-11-18 13:27:38.891898]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.056 pt2 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.056 [2024-11-18 13:27:38.903142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.056 13:27:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.056 "name": "raid_bdev1", 00:11:09.056 "uuid": "81f70068-3519-4a35-80af-57e211a84f03", 00:11:09.056 "strip_size_kb": 64, 00:11:09.056 "state": "configuring", 00:11:09.056 "raid_level": "raid0", 00:11:09.056 "superblock": true, 00:11:09.056 "num_base_bdevs": 4, 00:11:09.056 "num_base_bdevs_discovered": 1, 00:11:09.056 "num_base_bdevs_operational": 4, 00:11:09.056 "base_bdevs_list": [ 00:11:09.056 { 00:11:09.056 "name": "pt1", 00:11:09.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.056 "is_configured": true, 00:11:09.056 "data_offset": 2048, 00:11:09.056 "data_size": 63488 00:11:09.056 }, 00:11:09.056 { 00:11:09.056 "name": null, 00:11:09.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.056 "is_configured": false, 00:11:09.056 "data_offset": 0, 00:11:09.056 "data_size": 63488 00:11:09.056 }, 00:11:09.056 { 00:11:09.056 "name": null, 00:11:09.056 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.056 "is_configured": false, 00:11:09.056 "data_offset": 2048, 00:11:09.056 "data_size": 63488 00:11:09.056 }, 00:11:09.056 { 00:11:09.056 "name": null, 00:11:09.056 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.056 "is_configured": false, 00:11:09.056 "data_offset": 2048, 00:11:09.056 "data_size": 63488 00:11:09.056 } 00:11:09.056 ] 00:11:09.056 }' 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.056 13:27:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.324 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:09.324 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.324 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.324 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.324 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.324 [2024-11-18 13:27:39.334443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.325 [2024-11-18 13:27:39.334591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.325 [2024-11-18 13:27:39.334631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:09.325 [2024-11-18 13:27:39.334662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.325 [2024-11-18 13:27:39.335109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.325 [2024-11-18 13:27:39.335188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.325 [2024-11-18 13:27:39.335299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:09.325 [2024-11-18 13:27:39.335350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.325 pt2 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.325 [2024-11-18 13:27:39.346367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:09.325 [2024-11-18 13:27:39.346454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.325 [2024-11-18 13:27:39.346491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:09.325 [2024-11-18 13:27:39.346527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.325 [2024-11-18 13:27:39.346916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.325 [2024-11-18 13:27:39.346971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:09.325 [2024-11-18 13:27:39.347059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:09.325 [2024-11-18 13:27:39.347105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:09.325 pt3 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.325 [2024-11-18 13:27:39.358324] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:09.325 [2024-11-18 13:27:39.358370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.325 [2024-11-18 13:27:39.358387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:09.325 [2024-11-18 13:27:39.358395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.325 [2024-11-18 13:27:39.358747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.325 [2024-11-18 13:27:39.358763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:09.325 [2024-11-18 13:27:39.358818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:09.325 [2024-11-18 13:27:39.358834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:09.325 [2024-11-18 13:27:39.358957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.325 [2024-11-18 13:27:39.358965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:09.325 [2024-11-18 13:27:39.359219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:09.325 [2024-11-18 13:27:39.359361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.325 [2024-11-18 13:27:39.359375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:09.325 [2024-11-18 13:27:39.359513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.325 pt4 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.325 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.326 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.326 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.326 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.586 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.586 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.586 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.586 "name": "raid_bdev1", 00:11:09.586 "uuid": "81f70068-3519-4a35-80af-57e211a84f03", 00:11:09.586 "strip_size_kb": 64, 00:11:09.586 "state": "online", 00:11:09.587 "raid_level": "raid0", 00:11:09.587 
"superblock": true, 00:11:09.587 "num_base_bdevs": 4, 00:11:09.587 "num_base_bdevs_discovered": 4, 00:11:09.587 "num_base_bdevs_operational": 4, 00:11:09.587 "base_bdevs_list": [ 00:11:09.587 { 00:11:09.587 "name": "pt1", 00:11:09.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.587 "is_configured": true, 00:11:09.587 "data_offset": 2048, 00:11:09.587 "data_size": 63488 00:11:09.587 }, 00:11:09.587 { 00:11:09.587 "name": "pt2", 00:11:09.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.587 "is_configured": true, 00:11:09.587 "data_offset": 2048, 00:11:09.587 "data_size": 63488 00:11:09.587 }, 00:11:09.587 { 00:11:09.587 "name": "pt3", 00:11:09.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.587 "is_configured": true, 00:11:09.587 "data_offset": 2048, 00:11:09.587 "data_size": 63488 00:11:09.587 }, 00:11:09.587 { 00:11:09.587 "name": "pt4", 00:11:09.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.587 "is_configured": true, 00:11:09.587 "data_offset": 2048, 00:11:09.587 "data_size": 63488 00:11:09.587 } 00:11:09.587 ] 00:11:09.587 }' 00:11:09.587 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.587 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.846 13:27:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.846 [2024-11-18 13:27:39.789960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.846 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.846 "name": "raid_bdev1", 00:11:09.846 "aliases": [ 00:11:09.846 "81f70068-3519-4a35-80af-57e211a84f03" 00:11:09.846 ], 00:11:09.846 "product_name": "Raid Volume", 00:11:09.846 "block_size": 512, 00:11:09.846 "num_blocks": 253952, 00:11:09.846 "uuid": "81f70068-3519-4a35-80af-57e211a84f03", 00:11:09.846 "assigned_rate_limits": { 00:11:09.846 "rw_ios_per_sec": 0, 00:11:09.846 "rw_mbytes_per_sec": 0, 00:11:09.846 "r_mbytes_per_sec": 0, 00:11:09.846 "w_mbytes_per_sec": 0 00:11:09.846 }, 00:11:09.846 "claimed": false, 00:11:09.846 "zoned": false, 00:11:09.846 "supported_io_types": { 00:11:09.846 "read": true, 00:11:09.846 "write": true, 00:11:09.846 "unmap": true, 00:11:09.846 "flush": true, 00:11:09.846 "reset": true, 00:11:09.846 "nvme_admin": false, 00:11:09.846 "nvme_io": false, 00:11:09.846 "nvme_io_md": false, 00:11:09.846 "write_zeroes": true, 00:11:09.846 "zcopy": false, 00:11:09.846 "get_zone_info": false, 00:11:09.846 "zone_management": false, 00:11:09.846 "zone_append": false, 00:11:09.846 "compare": false, 00:11:09.846 "compare_and_write": false, 00:11:09.846 "abort": false, 00:11:09.846 "seek_hole": false, 00:11:09.846 "seek_data": false, 00:11:09.846 "copy": false, 00:11:09.846 "nvme_iov_md": false 00:11:09.846 }, 00:11:09.846 
"memory_domains": [ 00:11:09.846 { 00:11:09.846 "dma_device_id": "system", 00:11:09.846 "dma_device_type": 1 00:11:09.846 }, 00:11:09.846 { 00:11:09.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.846 "dma_device_type": 2 00:11:09.846 }, 00:11:09.846 { 00:11:09.846 "dma_device_id": "system", 00:11:09.846 "dma_device_type": 1 00:11:09.846 }, 00:11:09.846 { 00:11:09.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.846 "dma_device_type": 2 00:11:09.846 }, 00:11:09.846 { 00:11:09.846 "dma_device_id": "system", 00:11:09.846 "dma_device_type": 1 00:11:09.846 }, 00:11:09.846 { 00:11:09.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.846 "dma_device_type": 2 00:11:09.847 }, 00:11:09.847 { 00:11:09.847 "dma_device_id": "system", 00:11:09.847 "dma_device_type": 1 00:11:09.847 }, 00:11:09.847 { 00:11:09.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.847 "dma_device_type": 2 00:11:09.847 } 00:11:09.847 ], 00:11:09.847 "driver_specific": { 00:11:09.847 "raid": { 00:11:09.847 "uuid": "81f70068-3519-4a35-80af-57e211a84f03", 00:11:09.847 "strip_size_kb": 64, 00:11:09.847 "state": "online", 00:11:09.847 "raid_level": "raid0", 00:11:09.847 "superblock": true, 00:11:09.847 "num_base_bdevs": 4, 00:11:09.847 "num_base_bdevs_discovered": 4, 00:11:09.847 "num_base_bdevs_operational": 4, 00:11:09.847 "base_bdevs_list": [ 00:11:09.847 { 00:11:09.847 "name": "pt1", 00:11:09.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.847 "is_configured": true, 00:11:09.847 "data_offset": 2048, 00:11:09.847 "data_size": 63488 00:11:09.847 }, 00:11:09.847 { 00:11:09.847 "name": "pt2", 00:11:09.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.847 "is_configured": true, 00:11:09.847 "data_offset": 2048, 00:11:09.847 "data_size": 63488 00:11:09.847 }, 00:11:09.847 { 00:11:09.847 "name": "pt3", 00:11:09.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.847 "is_configured": true, 00:11:09.847 "data_offset": 2048, 00:11:09.847 "data_size": 63488 
00:11:09.847 }, 00:11:09.847 { 00:11:09.847 "name": "pt4", 00:11:09.847 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:09.847 "is_configured": true, 00:11:09.847 "data_offset": 2048, 00:11:09.847 "data_size": 63488 00:11:09.847 } 00:11:09.847 ] 00:11:09.847 } 00:11:09.847 } 00:11:09.847 }' 00:11:09.847 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.847 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:09.847 pt2 00:11:09.847 pt3 00:11:09.847 pt4' 00:11:09.847 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.106 13:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.106 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.106 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.106 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:10.107 [2024-11-18 13:27:40.093353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 81f70068-3519-4a35-80af-57e211a84f03 '!=' 81f70068-3519-4a35-80af-57e211a84f03 ']' 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70728 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70728 ']' 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70728 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.107 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70728 00:11:10.366 killing process with pid 70728 00:11:10.366 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.366 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.366 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70728' 00:11:10.366 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70728 00:11:10.366 [2024-11-18 13:27:40.185072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.366 [2024-11-18 13:27:40.185173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.366 [2024-11-18 13:27:40.185247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.366 [2024-11-18 13:27:40.185257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:10.366 13:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70728 00:11:10.625 [2024-11-18 13:27:40.587480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.004 13:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:12.004 00:11:12.004 real 0m5.576s 00:11:12.004 user 0m7.974s 00:11:12.004 sys 0m0.981s 00:11:12.004 13:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.004 ************************************ 00:11:12.004 END TEST raid_superblock_test 00:11:12.004 ************************************ 00:11:12.004 13:27:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.004 13:27:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:12.004 13:27:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.004 13:27:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.004 13:27:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.004 ************************************ 00:11:12.004 START TEST raid_read_error_test 00:11:12.004 ************************************ 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RTZ6T1DrtD 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70997 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70997 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70997 ']' 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.004 13:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.005 [2024-11-18 13:27:41.880036] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:12.005 [2024-11-18 13:27:41.880289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70997 ] 00:11:12.282 [2024-11-18 13:27:42.057092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.282 [2024-11-18 13:27:42.170915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.540 [2024-11-18 13:27:42.373114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.540 [2024-11-18 13:27:42.373183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.798 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.798 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.798 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.799 BaseBdev1_malloc 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.799 true 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.799 [2024-11-18 13:27:42.743660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.799 [2024-11-18 13:27:42.743726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.799 [2024-11-18 13:27:42.743745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.799 [2024-11-18 13:27:42.743756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.799 [2024-11-18 13:27:42.745790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.799 [2024-11-18 13:27:42.745832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.799 BaseBdev1 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.799 BaseBdev2_malloc 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.799 true 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.799 [2024-11-18 13:27:42.807709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.799 [2024-11-18 13:27:42.807763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.799 [2024-11-18 13:27:42.807779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.799 [2024-11-18 13:27:42.807789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.799 [2024-11-18 13:27:42.809794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.799 [2024-11-18 13:27:42.809834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.799 BaseBdev2 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.799 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.059 BaseBdev3_malloc 00:11:13.059 13:27:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.059 true 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.059 [2024-11-18 13:27:42.883380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:13.059 [2024-11-18 13:27:42.883535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.059 [2024-11-18 13:27:42.883560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:13.059 [2024-11-18 13:27:42.883573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.059 [2024-11-18 13:27:42.885881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.059 [2024-11-18 13:27:42.885921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:13.059 BaseBdev3 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.059 BaseBdev4_malloc 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.059 true 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.059 [2024-11-18 13:27:42.952420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:13.059 [2024-11-18 13:27:42.952481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.059 [2024-11-18 13:27:42.952499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:13.059 [2024-11-18 13:27:42.952510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.059 [2024-11-18 13:27:42.954612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.059 [2024-11-18 13:27:42.954754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:13.059 BaseBdev4 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.059 [2024-11-18 13:27:42.964462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.059 [2024-11-18 13:27:42.966462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.059 [2024-11-18 13:27:42.966630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.059 [2024-11-18 13:27:42.966756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.059 [2024-11-18 13:27:42.967119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:13.059 [2024-11-18 13:27:42.967200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.059 [2024-11-18 13:27:42.967536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:13.059 [2024-11-18 13:27:42.967772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:13.059 [2024-11-18 13:27:42.967819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:13.059 [2024-11-18 13:27:42.968049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:13.059 13:27:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.059 13:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.059 13:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.059 "name": "raid_bdev1", 00:11:13.059 "uuid": "cb6bc6f7-6674-4515-b5b0-7734431f1b73", 00:11:13.059 "strip_size_kb": 64, 00:11:13.059 "state": "online", 00:11:13.059 "raid_level": "raid0", 00:11:13.059 "superblock": true, 00:11:13.059 "num_base_bdevs": 4, 00:11:13.059 "num_base_bdevs_discovered": 4, 00:11:13.059 "num_base_bdevs_operational": 4, 00:11:13.059 "base_bdevs_list": [ 00:11:13.059 
{ 00:11:13.059 "name": "BaseBdev1", 00:11:13.059 "uuid": "1cb7be23-5696-5954-a190-7e75e39ffb83", 00:11:13.059 "is_configured": true, 00:11:13.059 "data_offset": 2048, 00:11:13.060 "data_size": 63488 00:11:13.060 }, 00:11:13.060 { 00:11:13.060 "name": "BaseBdev2", 00:11:13.060 "uuid": "05857b8f-8218-50b0-b2d3-cd5b779fa039", 00:11:13.060 "is_configured": true, 00:11:13.060 "data_offset": 2048, 00:11:13.060 "data_size": 63488 00:11:13.060 }, 00:11:13.060 { 00:11:13.060 "name": "BaseBdev3", 00:11:13.060 "uuid": "a6a43f0a-d4a4-56aa-9904-c176157de530", 00:11:13.060 "is_configured": true, 00:11:13.060 "data_offset": 2048, 00:11:13.060 "data_size": 63488 00:11:13.060 }, 00:11:13.060 { 00:11:13.060 "name": "BaseBdev4", 00:11:13.060 "uuid": "793fb51b-c98a-5cad-a756-ac7ed75a2ab1", 00:11:13.060 "is_configured": true, 00:11:13.060 "data_offset": 2048, 00:11:13.060 "data_size": 63488 00:11:13.060 } 00:11:13.060 ] 00:11:13.060 }' 00:11:13.060 13:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.060 13:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.657 13:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.657 13:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:13.657 [2024-11-18 13:27:43.508762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.597 13:27:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.597 13:27:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.597 "name": "raid_bdev1", 00:11:14.597 "uuid": "cb6bc6f7-6674-4515-b5b0-7734431f1b73", 00:11:14.597 "strip_size_kb": 64, 00:11:14.597 "state": "online", 00:11:14.597 "raid_level": "raid0", 00:11:14.597 "superblock": true, 00:11:14.597 "num_base_bdevs": 4, 00:11:14.597 "num_base_bdevs_discovered": 4, 00:11:14.597 "num_base_bdevs_operational": 4, 00:11:14.597 "base_bdevs_list": [ 00:11:14.597 { 00:11:14.597 "name": "BaseBdev1", 00:11:14.597 "uuid": "1cb7be23-5696-5954-a190-7e75e39ffb83", 00:11:14.597 "is_configured": true, 00:11:14.597 "data_offset": 2048, 00:11:14.597 "data_size": 63488 00:11:14.597 }, 00:11:14.597 { 00:11:14.597 "name": "BaseBdev2", 00:11:14.597 "uuid": "05857b8f-8218-50b0-b2d3-cd5b779fa039", 00:11:14.597 "is_configured": true, 00:11:14.597 "data_offset": 2048, 00:11:14.597 "data_size": 63488 00:11:14.597 }, 00:11:14.597 { 00:11:14.597 "name": "BaseBdev3", 00:11:14.597 "uuid": "a6a43f0a-d4a4-56aa-9904-c176157de530", 00:11:14.597 "is_configured": true, 00:11:14.597 "data_offset": 2048, 00:11:14.597 "data_size": 63488 00:11:14.597 }, 00:11:14.597 { 00:11:14.597 "name": "BaseBdev4", 00:11:14.597 "uuid": "793fb51b-c98a-5cad-a756-ac7ed75a2ab1", 00:11:14.597 "is_configured": true, 00:11:14.597 "data_offset": 2048, 00:11:14.597 "data_size": 63488 00:11:14.597 } 00:11:14.597 ] 00:11:14.597 }' 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.597 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.857 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.857 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.857 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.857 [2024-11-18 13:27:44.898892] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.857 [2024-11-18 13:27:44.899038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.857 [2024-11-18 13:27:44.901598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.857 [2024-11-18 13:27:44.901695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.857 [2024-11-18 13:27:44.901755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.857 [2024-11-18 13:27:44.901825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:14.857 { 00:11:14.857 "results": [ 00:11:14.857 { 00:11:14.857 "job": "raid_bdev1", 00:11:14.857 "core_mask": "0x1", 00:11:14.857 "workload": "randrw", 00:11:14.857 "percentage": 50, 00:11:14.857 "status": "finished", 00:11:14.857 "queue_depth": 1, 00:11:14.857 "io_size": 131072, 00:11:14.857 "runtime": 1.391219, 00:11:14.857 "iops": 16456.07197716535, 00:11:14.857 "mibps": 2057.0089971456687, 00:11:14.857 "io_failed": 1, 00:11:14.857 "io_timeout": 0, 00:11:14.857 "avg_latency_us": 84.7036174065961, 00:11:14.857 "min_latency_us": 24.258515283842794, 00:11:14.857 "max_latency_us": 1337.907423580786 00:11:14.857 } 00:11:14.857 ], 00:11:14.857 "core_count": 1 00:11:14.857 } 00:11:14.857 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.857 13:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70997 00:11:14.857 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70997 ']' 00:11:14.857 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70997 00:11:14.857 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:15.116 13:27:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.116 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70997 00:11:15.116 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.116 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.116 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70997' 00:11:15.116 killing process with pid 70997 00:11:15.116 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70997 00:11:15.116 [2024-11-18 13:27:44.953509] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.116 13:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70997 00:11:15.376 [2024-11-18 13:27:45.275632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RTZ6T1DrtD 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:16.758 00:11:16.758 real 0m4.671s 00:11:16.758 user 0m5.501s 00:11:16.758 sys 0m0.609s 00:11:16.758 ************************************ 00:11:16.758 END TEST raid_read_error_test 
00:11:16.758 ************************************ 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.758 13:27:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.758 13:27:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:16.758 13:27:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.758 13:27:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.758 13:27:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.758 ************************************ 00:11:16.758 START TEST raid_write_error_test 00:11:16.758 ************************************ 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.syMe7V3kqF 00:11:16.758 13:27:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71138 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71138 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71138 ']' 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.758 13:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.758 [2024-11-18 13:27:46.619630] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:16.758 [2024-11-18 13:27:46.619744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71138 ] 00:11:16.758 [2024-11-18 13:27:46.793934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.018 [2024-11-18 13:27:46.904109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.278 [2024-11-18 13:27:47.101325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.278 [2024-11-18 13:27:47.101481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.538 BaseBdev1_malloc 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.538 true 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.538 [2024-11-18 13:27:47.514776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.538 [2024-11-18 13:27:47.514843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.538 [2024-11-18 13:27:47.514861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:17.538 [2024-11-18 13:27:47.514872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.538 [2024-11-18 13:27:47.516897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.538 [2024-11-18 13:27:47.516941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.538 BaseBdev1 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.538 BaseBdev2_malloc 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:17.538 13:27:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.538 true 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.538 [2024-11-18 13:27:47.580972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.538 [2024-11-18 13:27:47.581032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.538 [2024-11-18 13:27:47.581049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:17.538 [2024-11-18 13:27:47.581060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.538 [2024-11-18 13:27:47.583119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.538 [2024-11-18 13:27:47.583174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.538 BaseBdev2 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.538 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:17.799 BaseBdev3_malloc 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.799 true 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.799 [2024-11-18 13:27:47.659872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:17.799 [2024-11-18 13:27:47.660025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.799 [2024-11-18 13:27:47.660046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:17.799 [2024-11-18 13:27:47.660057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.799 [2024-11-18 13:27:47.662224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.799 [2024-11-18 13:27:47.662265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:17.799 BaseBdev3 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.799 BaseBdev4_malloc 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.799 true 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.799 [2024-11-18 13:27:47.729012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:17.799 [2024-11-18 13:27:47.729155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.799 [2024-11-18 13:27:47.729193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:17.799 [2024-11-18 13:27:47.729223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.799 [2024-11-18 13:27:47.731329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.799 [2024-11-18 13:27:47.731411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:17.799 BaseBdev4 
00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.799 [2024-11-18 13:27:47.741053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.799 [2024-11-18 13:27:47.742951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.799 [2024-11-18 13:27:47.743032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.799 [2024-11-18 13:27:47.743098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.799 [2024-11-18 13:27:47.743339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:17.799 [2024-11-18 13:27:47.743364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.799 [2024-11-18 13:27:47.743599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:17.799 [2024-11-18 13:27:47.743766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:17.799 [2024-11-18 13:27:47.743777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:17.799 [2024-11-18 13:27:47.743925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.799 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.799 "name": "raid_bdev1", 00:11:17.799 "uuid": "8990b028-c5f0-43e7-bfe5-440d2837d5bb", 00:11:17.799 "strip_size_kb": 64, 00:11:17.799 "state": "online", 00:11:17.799 "raid_level": "raid0", 00:11:17.799 "superblock": true, 00:11:17.799 "num_base_bdevs": 4, 00:11:17.799 "num_base_bdevs_discovered": 4, 00:11:17.799 
"num_base_bdevs_operational": 4, 00:11:17.799 "base_bdevs_list": [ 00:11:17.799 { 00:11:17.799 "name": "BaseBdev1", 00:11:17.799 "uuid": "fe615e15-a853-57d6-87ec-32655c0069a3", 00:11:17.799 "is_configured": true, 00:11:17.799 "data_offset": 2048, 00:11:17.799 "data_size": 63488 00:11:17.799 }, 00:11:17.799 { 00:11:17.799 "name": "BaseBdev2", 00:11:17.799 "uuid": "7e1cc946-2575-50c7-87dc-fc7ac1001d56", 00:11:17.799 "is_configured": true, 00:11:17.799 "data_offset": 2048, 00:11:17.799 "data_size": 63488 00:11:17.799 }, 00:11:17.799 { 00:11:17.799 "name": "BaseBdev3", 00:11:17.799 "uuid": "4eb5db9a-fa53-5659-be1b-1ece2b74e957", 00:11:17.799 "is_configured": true, 00:11:17.799 "data_offset": 2048, 00:11:17.799 "data_size": 63488 00:11:17.799 }, 00:11:17.799 { 00:11:17.799 "name": "BaseBdev4", 00:11:17.800 "uuid": "ec2c4034-cce4-5db5-a809-05ce33903b74", 00:11:17.800 "is_configured": true, 00:11:17.800 "data_offset": 2048, 00:11:17.800 "data_size": 63488 00:11:17.800 } 00:11:17.800 ] 00:11:17.800 }' 00:11:17.800 13:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.800 13:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.370 13:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:18.370 13:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:18.370 [2024-11-18 13:27:48.325317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- 
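[annotation] The `verify_raid_bdev_state raid_bdev1 online raid0 64 4` call above extracts the JSON shown with `jq -r '.[] | select(.name == "raid_bdev1")'` and compares individual fields in shell. The same checks can be sketched in Python; the JSON fields below are copied from the log output (trimmed to the ones the helper inspects), and the print message is illustrative, not part of the test:

```python
import json

# raid_bdev_info as captured by "rpc_cmd bdev_raid_get_bdevs all" in the log,
# trimmed to the fields that verify_raid_bdev_state actually compares.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}
""")

# Mirrors the shell helper's [[ ... ]] comparisons against the expected
# state/level/strip size and base bdev counts passed as arguments.
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid0"
assert raid_bdev_info["strip_size_kb"] == 64
assert (raid_bdev_info["num_base_bdevs_discovered"]
        == raid_bdev_info["num_base_bdevs_operational"] == 4)
print("raid_bdev1 state verified")
```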
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.310 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.310 "name": "raid_bdev1", 00:11:19.310 "uuid": "8990b028-c5f0-43e7-bfe5-440d2837d5bb", 00:11:19.310 "strip_size_kb": 64, 00:11:19.310 "state": "online", 00:11:19.310 "raid_level": "raid0", 00:11:19.310 "superblock": true, 00:11:19.310 "num_base_bdevs": 4, 00:11:19.310 "num_base_bdevs_discovered": 4, 00:11:19.310 "num_base_bdevs_operational": 4, 00:11:19.310 "base_bdevs_list": [ 00:11:19.310 { 00:11:19.311 "name": "BaseBdev1", 00:11:19.311 "uuid": "fe615e15-a853-57d6-87ec-32655c0069a3", 00:11:19.311 "is_configured": true, 00:11:19.311 "data_offset": 2048, 00:11:19.311 "data_size": 63488 00:11:19.311 }, 00:11:19.311 { 00:11:19.311 "name": "BaseBdev2", 00:11:19.311 "uuid": "7e1cc946-2575-50c7-87dc-fc7ac1001d56", 00:11:19.311 "is_configured": true, 00:11:19.311 "data_offset": 2048, 00:11:19.311 "data_size": 63488 00:11:19.311 }, 00:11:19.311 { 00:11:19.311 "name": "BaseBdev3", 00:11:19.311 "uuid": "4eb5db9a-fa53-5659-be1b-1ece2b74e957", 00:11:19.311 "is_configured": true, 00:11:19.311 "data_offset": 2048, 00:11:19.311 "data_size": 63488 00:11:19.311 }, 00:11:19.311 { 00:11:19.311 "name": "BaseBdev4", 00:11:19.311 "uuid": "ec2c4034-cce4-5db5-a809-05ce33903b74", 00:11:19.311 "is_configured": true, 00:11:19.311 "data_offset": 2048, 00:11:19.311 "data_size": 63488 00:11:19.311 } 00:11:19.311 ] 00:11:19.311 }' 00:11:19.311 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.311 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.886 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:19.887 [2024-11-18 13:27:49.709420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.887 [2024-11-18 13:27:49.709468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.887 [2024-11-18 13:27:49.712255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.887 [2024-11-18 13:27:49.712315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.887 [2024-11-18 13:27:49.712373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.887 [2024-11-18 13:27:49.712386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:19.887 { 00:11:19.887 "results": [ 00:11:19.887 { 00:11:19.887 "job": "raid_bdev1", 00:11:19.887 "core_mask": "0x1", 00:11:19.887 "workload": "randrw", 00:11:19.887 "percentage": 50, 00:11:19.887 "status": "finished", 00:11:19.887 "queue_depth": 1, 00:11:19.887 "io_size": 131072, 00:11:19.887 "runtime": 1.384894, 00:11:19.887 "iops": 15895.801411515971, 00:11:19.887 "mibps": 1986.9751764394964, 00:11:19.887 "io_failed": 1, 00:11:19.887 "io_timeout": 0, 00:11:19.887 "avg_latency_us": 87.61320568449261, 00:11:19.887 "min_latency_us": 26.047161572052403, 00:11:19.887 "max_latency_us": 1387.989519650655 00:11:19.887 } 00:11:19.887 ], 00:11:19.887 "core_count": 1 00:11:19.887 } 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71138 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71138 ']' 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71138 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
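[annotation] The `fail_per_s=0.72` that the test later extracts with `grep`/`awk` follows directly from the bdevperf results block above. A minimal sketch of that arithmetic, with all input values copied from the log's `"results"` JSON (the variable names here are illustrative, not from the test scripts):

```python
# Inputs copied from the bdevperf "results" block for job raid_bdev1.
runtime_s = 1.384894       # "runtime"
iops = 15895.801411515971  # "iops"
io_failed = 1              # "io_failed"
io_size = 131072           # "io_size" (128 KiB, from -o 128k)

# Failed I/Os per second -- the value bdev_raid.sh@845 parses out of the
# bdevperf output file and compares against 0.00.
fail_per_s = io_failed / runtime_s

# Throughput in MiB/s: IOPS times I/O size over bytes per MiB; with 128 KiB
# I/Os this is simply iops / 8.
mibps = iops * io_size / (1 << 20)

print(round(fail_per_s, 2))  # 0.72, matching the log
print(mibps)                 # 1986.9751764394964, matching "mibps"
```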
00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71138 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.887 killing process with pid 71138 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71138' 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71138 00:11:19.887 [2024-11-18 13:27:49.753208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.887 13:27:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71138 00:11:20.146 [2024-11-18 13:27:50.077896] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.syMe7V3kqF 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:21.526 ************************************ 00:11:21.526 END TEST raid_write_error_test 00:11:21.526 ************************************ 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.72 != \0\.\0\0 ]] 00:11:21.526 00:11:21.526 real 0m4.738s 00:11:21.526 user 0m5.620s 00:11:21.526 sys 0m0.596s 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.526 13:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.526 13:27:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:21.526 13:27:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:21.526 13:27:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.526 13:27:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.526 13:27:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.526 ************************************ 00:11:21.526 START TEST raid_state_function_test 00:11:21.526 ************************************ 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71282 00:11:21.526 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.527 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71282' 00:11:21.527 Process raid pid: 71282 00:11:21.527 13:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71282 00:11:21.527 13:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71282 ']' 00:11:21.527 13:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.527 13:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.527 13:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.527 13:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.527 13:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.527 [2024-11-18 13:27:51.437635] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:21.527 [2024-11-18 13:27:51.437781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.797 [2024-11-18 13:27:51.609780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.797 [2024-11-18 13:27:51.725139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.058 [2024-11-18 13:27:51.929280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.058 [2024-11-18 13:27:51.929318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.318 [2024-11-18 13:27:52.281811] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.318 [2024-11-18 13:27:52.281886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.318 [2024-11-18 13:27:52.281896] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.318 [2024-11-18 13:27:52.281906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.318 [2024-11-18 13:27:52.281912] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:22.318 [2024-11-18 13:27:52.281921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.318 [2024-11-18 13:27:52.281927] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.318 [2024-11-18 13:27:52.281936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.318 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.319 "name": "Existed_Raid", 00:11:22.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.319 "strip_size_kb": 64, 00:11:22.319 "state": "configuring", 00:11:22.319 "raid_level": "concat", 00:11:22.319 "superblock": false, 00:11:22.319 "num_base_bdevs": 4, 00:11:22.319 "num_base_bdevs_discovered": 0, 00:11:22.319 "num_base_bdevs_operational": 4, 00:11:22.319 "base_bdevs_list": [ 00:11:22.319 { 00:11:22.319 "name": "BaseBdev1", 00:11:22.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.319 "is_configured": false, 00:11:22.319 "data_offset": 0, 00:11:22.319 "data_size": 0 00:11:22.319 }, 00:11:22.319 { 00:11:22.319 "name": "BaseBdev2", 00:11:22.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.319 "is_configured": false, 00:11:22.319 "data_offset": 0, 00:11:22.319 "data_size": 0 00:11:22.319 }, 00:11:22.319 { 00:11:22.319 "name": "BaseBdev3", 00:11:22.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.319 "is_configured": false, 00:11:22.319 "data_offset": 0, 00:11:22.319 "data_size": 0 00:11:22.319 }, 00:11:22.319 { 00:11:22.319 "name": "BaseBdev4", 00:11:22.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.319 "is_configured": false, 00:11:22.319 "data_offset": 0, 00:11:22.319 "data_size": 0 00:11:22.319 } 00:11:22.319 ] 00:11:22.319 }' 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.319 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.889 [2024-11-18 13:27:52.764913] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.889 [2024-11-18 13:27:52.764961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.889 [2024-11-18 13:27:52.776852] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.889 [2024-11-18 13:27:52.776944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.889 [2024-11-18 13:27:52.776972] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.889 [2024-11-18 13:27:52.776994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.889 [2024-11-18 13:27:52.777012] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.889 [2024-11-18 13:27:52.777032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.889 [2024-11-18 13:27:52.777050] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.889 [2024-11-18 13:27:52.777070] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.889 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.889 [2024-11-18 13:27:52.829967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.889 BaseBdev1 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.890 [ 00:11:22.890 { 00:11:22.890 "name": "BaseBdev1", 00:11:22.890 "aliases": [ 00:11:22.890 "fa14f2bb-9878-441b-9ded-f0938fa78882" 00:11:22.890 ], 00:11:22.890 "product_name": "Malloc disk", 00:11:22.890 "block_size": 512, 00:11:22.890 "num_blocks": 65536, 00:11:22.890 "uuid": "fa14f2bb-9878-441b-9ded-f0938fa78882", 00:11:22.890 "assigned_rate_limits": { 00:11:22.890 "rw_ios_per_sec": 0, 00:11:22.890 "rw_mbytes_per_sec": 0, 00:11:22.890 "r_mbytes_per_sec": 0, 00:11:22.890 "w_mbytes_per_sec": 0 00:11:22.890 }, 00:11:22.890 "claimed": true, 00:11:22.890 "claim_type": "exclusive_write", 00:11:22.890 "zoned": false, 00:11:22.890 "supported_io_types": { 00:11:22.890 "read": true, 00:11:22.890 "write": true, 00:11:22.890 "unmap": true, 00:11:22.890 "flush": true, 00:11:22.890 "reset": true, 00:11:22.890 "nvme_admin": false, 00:11:22.890 "nvme_io": false, 00:11:22.890 "nvme_io_md": false, 00:11:22.890 "write_zeroes": true, 00:11:22.890 "zcopy": true, 00:11:22.890 "get_zone_info": false, 00:11:22.890 "zone_management": false, 00:11:22.890 "zone_append": false, 00:11:22.890 "compare": false, 00:11:22.890 "compare_and_write": false, 00:11:22.890 "abort": true, 00:11:22.890 "seek_hole": false, 00:11:22.890 "seek_data": false, 00:11:22.890 "copy": true, 00:11:22.890 "nvme_iov_md": false 00:11:22.890 }, 00:11:22.890 "memory_domains": [ 00:11:22.890 { 00:11:22.890 "dma_device_id": "system", 00:11:22.890 "dma_device_type": 1 00:11:22.890 }, 00:11:22.890 { 00:11:22.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.890 "dma_device_type": 2 00:11:22.890 } 00:11:22.890 ], 00:11:22.890 "driver_specific": {} 00:11:22.890 } 00:11:22.890 ] 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.890 "name": "Existed_Raid", 
00:11:22.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.890 "strip_size_kb": 64, 00:11:22.890 "state": "configuring", 00:11:22.890 "raid_level": "concat", 00:11:22.890 "superblock": false, 00:11:22.890 "num_base_bdevs": 4, 00:11:22.890 "num_base_bdevs_discovered": 1, 00:11:22.890 "num_base_bdevs_operational": 4, 00:11:22.890 "base_bdevs_list": [ 00:11:22.890 { 00:11:22.890 "name": "BaseBdev1", 00:11:22.890 "uuid": "fa14f2bb-9878-441b-9ded-f0938fa78882", 00:11:22.890 "is_configured": true, 00:11:22.890 "data_offset": 0, 00:11:22.890 "data_size": 65536 00:11:22.890 }, 00:11:22.890 { 00:11:22.890 "name": "BaseBdev2", 00:11:22.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.890 "is_configured": false, 00:11:22.890 "data_offset": 0, 00:11:22.890 "data_size": 0 00:11:22.890 }, 00:11:22.890 { 00:11:22.890 "name": "BaseBdev3", 00:11:22.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.890 "is_configured": false, 00:11:22.890 "data_offset": 0, 00:11:22.890 "data_size": 0 00:11:22.890 }, 00:11:22.890 { 00:11:22.890 "name": "BaseBdev4", 00:11:22.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.890 "is_configured": false, 00:11:22.890 "data_offset": 0, 00:11:22.890 "data_size": 0 00:11:22.890 } 00:11:22.890 ] 00:11:22.890 }' 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.890 13:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.459 [2024-11-18 13:27:53.349159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.459 [2024-11-18 13:27:53.349231] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.459 [2024-11-18 13:27:53.361163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.459 [2024-11-18 13:27:53.362972] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.459 [2024-11-18 13:27:53.363010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.459 [2024-11-18 13:27:53.363020] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.459 [2024-11-18 13:27:53.363030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.459 [2024-11-18 13:27:53.363036] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:23.459 [2024-11-18 13:27:53.363045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
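The trace above repeatedly feeds `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` to pull out one raid bdev's info blob. A minimal Python sketch of that same selection is below; the JSON payload mirrors the field names seen in the log, but the values are illustrative, not taken from a live target:

```python
import json

# Sample output shaped like the `bdev_raid_get_bdevs all` responses in this
# log (field names copied from the trace; values are illustrative only).
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": false,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true,
       "data_offset": 0, "data_size": 65536}
    ]
  }
]
""")

def select_raid(bdevs, name):
    """Python equivalent of jq's '.[] | select(.name == NAME)'."""
    return next((b for b in bdevs if b["name"] == name), None)

info = select_raid(raid_bdevs, "Existed_Raid")
assert info is not None
assert info["state"] == "configuring"
assert info["num_base_bdevs_discovered"] == 1
```

The shell helper stores the selected object in `raid_bdev_info` and asserts on individual fields; the dict returned here plays the same role.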
00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.459 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.460 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.460 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.460 "name": "Existed_Raid", 00:11:23.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.460 "strip_size_kb": 64, 00:11:23.460 "state": "configuring", 00:11:23.460 "raid_level": "concat", 00:11:23.460 "superblock": false, 00:11:23.460 "num_base_bdevs": 4, 00:11:23.460 
"num_base_bdevs_discovered": 1, 00:11:23.460 "num_base_bdevs_operational": 4, 00:11:23.460 "base_bdevs_list": [ 00:11:23.460 { 00:11:23.460 "name": "BaseBdev1", 00:11:23.460 "uuid": "fa14f2bb-9878-441b-9ded-f0938fa78882", 00:11:23.460 "is_configured": true, 00:11:23.460 "data_offset": 0, 00:11:23.460 "data_size": 65536 00:11:23.460 }, 00:11:23.460 { 00:11:23.460 "name": "BaseBdev2", 00:11:23.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.460 "is_configured": false, 00:11:23.460 "data_offset": 0, 00:11:23.460 "data_size": 0 00:11:23.460 }, 00:11:23.460 { 00:11:23.460 "name": "BaseBdev3", 00:11:23.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.460 "is_configured": false, 00:11:23.460 "data_offset": 0, 00:11:23.460 "data_size": 0 00:11:23.460 }, 00:11:23.460 { 00:11:23.460 "name": "BaseBdev4", 00:11:23.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.460 "is_configured": false, 00:11:23.460 "data_offset": 0, 00:11:23.460 "data_size": 0 00:11:23.460 } 00:11:23.460 ] 00:11:23.460 }' 00:11:23.460 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.460 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.029 [2024-11-18 13:27:53.845410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.029 BaseBdev2 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:24.029 13:27:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.029 [ 00:11:24.029 { 00:11:24.029 "name": "BaseBdev2", 00:11:24.029 "aliases": [ 00:11:24.029 "a9cf05f8-186d-477d-914c-69b8e19b00e7" 00:11:24.029 ], 00:11:24.029 "product_name": "Malloc disk", 00:11:24.029 "block_size": 512, 00:11:24.029 "num_blocks": 65536, 00:11:24.029 "uuid": "a9cf05f8-186d-477d-914c-69b8e19b00e7", 00:11:24.029 "assigned_rate_limits": { 00:11:24.029 "rw_ios_per_sec": 0, 00:11:24.029 "rw_mbytes_per_sec": 0, 00:11:24.029 "r_mbytes_per_sec": 0, 00:11:24.029 "w_mbytes_per_sec": 0 00:11:24.029 }, 00:11:24.029 "claimed": true, 00:11:24.029 "claim_type": "exclusive_write", 00:11:24.029 "zoned": false, 00:11:24.029 "supported_io_types": { 
00:11:24.029 "read": true, 00:11:24.029 "write": true, 00:11:24.029 "unmap": true, 00:11:24.029 "flush": true, 00:11:24.029 "reset": true, 00:11:24.029 "nvme_admin": false, 00:11:24.029 "nvme_io": false, 00:11:24.029 "nvme_io_md": false, 00:11:24.029 "write_zeroes": true, 00:11:24.029 "zcopy": true, 00:11:24.029 "get_zone_info": false, 00:11:24.029 "zone_management": false, 00:11:24.029 "zone_append": false, 00:11:24.029 "compare": false, 00:11:24.029 "compare_and_write": false, 00:11:24.029 "abort": true, 00:11:24.029 "seek_hole": false, 00:11:24.029 "seek_data": false, 00:11:24.029 "copy": true, 00:11:24.029 "nvme_iov_md": false 00:11:24.029 }, 00:11:24.029 "memory_domains": [ 00:11:24.029 { 00:11:24.029 "dma_device_id": "system", 00:11:24.029 "dma_device_type": 1 00:11:24.029 }, 00:11:24.029 { 00:11:24.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.029 "dma_device_type": 2 00:11:24.029 } 00:11:24.029 ], 00:11:24.029 "driver_specific": {} 00:11:24.029 } 00:11:24.029 ] 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.029 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.030 "name": "Existed_Raid", 00:11:24.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.030 "strip_size_kb": 64, 00:11:24.030 "state": "configuring", 00:11:24.030 "raid_level": "concat", 00:11:24.030 "superblock": false, 00:11:24.030 "num_base_bdevs": 4, 00:11:24.030 "num_base_bdevs_discovered": 2, 00:11:24.030 "num_base_bdevs_operational": 4, 00:11:24.030 "base_bdevs_list": [ 00:11:24.030 { 00:11:24.030 "name": "BaseBdev1", 00:11:24.030 "uuid": "fa14f2bb-9878-441b-9ded-f0938fa78882", 00:11:24.030 "is_configured": true, 00:11:24.030 "data_offset": 0, 00:11:24.030 "data_size": 65536 00:11:24.030 }, 00:11:24.030 { 00:11:24.030 "name": "BaseBdev2", 00:11:24.030 "uuid": "a9cf05f8-186d-477d-914c-69b8e19b00e7", 00:11:24.030 
"is_configured": true, 00:11:24.030 "data_offset": 0, 00:11:24.030 "data_size": 65536 00:11:24.030 }, 00:11:24.030 { 00:11:24.030 "name": "BaseBdev3", 00:11:24.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.030 "is_configured": false, 00:11:24.030 "data_offset": 0, 00:11:24.030 "data_size": 0 00:11:24.030 }, 00:11:24.030 { 00:11:24.030 "name": "BaseBdev4", 00:11:24.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.030 "is_configured": false, 00:11:24.030 "data_offset": 0, 00:11:24.030 "data_size": 0 00:11:24.030 } 00:11:24.030 ] 00:11:24.030 }' 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.030 13:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.289 [2024-11-18 13:27:54.323514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.289 BaseBdev3 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.289 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.548 [ 00:11:24.548 { 00:11:24.548 "name": "BaseBdev3", 00:11:24.548 "aliases": [ 00:11:24.548 "9948bf49-8f4b-49d9-9124-285a59c96abe" 00:11:24.548 ], 00:11:24.548 "product_name": "Malloc disk", 00:11:24.548 "block_size": 512, 00:11:24.548 "num_blocks": 65536, 00:11:24.548 "uuid": "9948bf49-8f4b-49d9-9124-285a59c96abe", 00:11:24.548 "assigned_rate_limits": { 00:11:24.548 "rw_ios_per_sec": 0, 00:11:24.548 "rw_mbytes_per_sec": 0, 00:11:24.548 "r_mbytes_per_sec": 0, 00:11:24.548 "w_mbytes_per_sec": 0 00:11:24.548 }, 00:11:24.548 "claimed": true, 00:11:24.548 "claim_type": "exclusive_write", 00:11:24.548 "zoned": false, 00:11:24.548 "supported_io_types": { 00:11:24.548 "read": true, 00:11:24.548 "write": true, 00:11:24.548 "unmap": true, 00:11:24.548 "flush": true, 00:11:24.548 "reset": true, 00:11:24.548 "nvme_admin": false, 00:11:24.548 "nvme_io": false, 00:11:24.548 "nvme_io_md": false, 00:11:24.548 "write_zeroes": true, 00:11:24.548 "zcopy": true, 00:11:24.548 "get_zone_info": false, 00:11:24.548 "zone_management": false, 00:11:24.548 "zone_append": false, 00:11:24.548 "compare": false, 00:11:24.548 "compare_and_write": false, 
00:11:24.548 "abort": true, 00:11:24.548 "seek_hole": false, 00:11:24.548 "seek_data": false, 00:11:24.548 "copy": true, 00:11:24.548 "nvme_iov_md": false 00:11:24.548 }, 00:11:24.548 "memory_domains": [ 00:11:24.548 { 00:11:24.548 "dma_device_id": "system", 00:11:24.548 "dma_device_type": 1 00:11:24.548 }, 00:11:24.548 { 00:11:24.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.548 "dma_device_type": 2 00:11:24.548 } 00:11:24.548 ], 00:11:24.548 "driver_specific": {} 00:11:24.548 } 00:11:24.548 ] 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.548 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.548 "name": "Existed_Raid", 00:11:24.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.548 "strip_size_kb": 64, 00:11:24.548 "state": "configuring", 00:11:24.548 "raid_level": "concat", 00:11:24.548 "superblock": false, 00:11:24.548 "num_base_bdevs": 4, 00:11:24.548 "num_base_bdevs_discovered": 3, 00:11:24.548 "num_base_bdevs_operational": 4, 00:11:24.548 "base_bdevs_list": [ 00:11:24.548 { 00:11:24.548 "name": "BaseBdev1", 00:11:24.548 "uuid": "fa14f2bb-9878-441b-9ded-f0938fa78882", 00:11:24.548 "is_configured": true, 00:11:24.548 "data_offset": 0, 00:11:24.548 "data_size": 65536 00:11:24.548 }, 00:11:24.548 { 00:11:24.548 "name": "BaseBdev2", 00:11:24.548 "uuid": "a9cf05f8-186d-477d-914c-69b8e19b00e7", 00:11:24.548 "is_configured": true, 00:11:24.549 "data_offset": 0, 00:11:24.549 "data_size": 65536 00:11:24.549 }, 00:11:24.549 { 00:11:24.549 "name": "BaseBdev3", 00:11:24.549 "uuid": "9948bf49-8f4b-49d9-9124-285a59c96abe", 00:11:24.549 "is_configured": true, 00:11:24.549 "data_offset": 0, 00:11:24.549 "data_size": 65536 00:11:24.549 }, 00:11:24.549 { 00:11:24.549 "name": "BaseBdev4", 00:11:24.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.549 "is_configured": false, 
00:11:24.549 "data_offset": 0, 00:11:24.549 "data_size": 0 00:11:24.549 } 00:11:24.549 ] 00:11:24.549 }' 00:11:24.549 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.549 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.809 [2024-11-18 13:27:54.801165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.809 [2024-11-18 13:27:54.801218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.809 [2024-11-18 13:27:54.801227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:24.809 [2024-11-18 13:27:54.801493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.809 [2024-11-18 13:27:54.801677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.809 [2024-11-18 13:27:54.801697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:24.809 [2024-11-18 13:27:54.801970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.809 BaseBdev4 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.809 [ 00:11:24.809 { 00:11:24.809 "name": "BaseBdev4", 00:11:24.809 "aliases": [ 00:11:24.809 "8f34f279-5f1e-455a-a9ed-9d939f5350eb" 00:11:24.809 ], 00:11:24.809 "product_name": "Malloc disk", 00:11:24.809 "block_size": 512, 00:11:24.809 "num_blocks": 65536, 00:11:24.809 "uuid": "8f34f279-5f1e-455a-a9ed-9d939f5350eb", 00:11:24.809 "assigned_rate_limits": { 00:11:24.809 "rw_ios_per_sec": 0, 00:11:24.809 "rw_mbytes_per_sec": 0, 00:11:24.809 "r_mbytes_per_sec": 0, 00:11:24.809 "w_mbytes_per_sec": 0 00:11:24.809 }, 00:11:24.809 "claimed": true, 00:11:24.809 "claim_type": "exclusive_write", 00:11:24.809 "zoned": false, 00:11:24.809 "supported_io_types": { 00:11:24.809 "read": true, 00:11:24.809 "write": true, 00:11:24.809 "unmap": true, 00:11:24.809 "flush": true, 00:11:24.809 "reset": true, 00:11:24.809 
"nvme_admin": false, 00:11:24.809 "nvme_io": false, 00:11:24.809 "nvme_io_md": false, 00:11:24.809 "write_zeroes": true, 00:11:24.809 "zcopy": true, 00:11:24.809 "get_zone_info": false, 00:11:24.809 "zone_management": false, 00:11:24.809 "zone_append": false, 00:11:24.809 "compare": false, 00:11:24.809 "compare_and_write": false, 00:11:24.809 "abort": true, 00:11:24.809 "seek_hole": false, 00:11:24.809 "seek_data": false, 00:11:24.809 "copy": true, 00:11:24.809 "nvme_iov_md": false 00:11:24.809 }, 00:11:24.809 "memory_domains": [ 00:11:24.809 { 00:11:24.809 "dma_device_id": "system", 00:11:24.809 "dma_device_type": 1 00:11:24.809 }, 00:11:24.809 { 00:11:24.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.809 "dma_device_type": 2 00:11:24.809 } 00:11:24.809 ], 00:11:24.809 "driver_specific": {} 00:11:24.809 } 00:11:24.809 ] 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.809 
13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.809 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.069 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.069 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.069 "name": "Existed_Raid", 00:11:25.069 "uuid": "a85da8fd-0559-4917-8425-eee4f42891e9", 00:11:25.069 "strip_size_kb": 64, 00:11:25.069 "state": "online", 00:11:25.069 "raid_level": "concat", 00:11:25.069 "superblock": false, 00:11:25.069 "num_base_bdevs": 4, 00:11:25.069 "num_base_bdevs_discovered": 4, 00:11:25.069 "num_base_bdevs_operational": 4, 00:11:25.069 "base_bdevs_list": [ 00:11:25.069 { 00:11:25.069 "name": "BaseBdev1", 00:11:25.069 "uuid": "fa14f2bb-9878-441b-9ded-f0938fa78882", 00:11:25.069 "is_configured": true, 00:11:25.069 "data_offset": 0, 00:11:25.069 "data_size": 65536 00:11:25.069 }, 00:11:25.069 { 00:11:25.069 "name": "BaseBdev2", 00:11:25.069 "uuid": "a9cf05f8-186d-477d-914c-69b8e19b00e7", 00:11:25.069 "is_configured": true, 00:11:25.069 "data_offset": 0, 00:11:25.069 "data_size": 65536 00:11:25.069 }, 00:11:25.069 { 00:11:25.069 "name": "BaseBdev3", 
00:11:25.069 "uuid": "9948bf49-8f4b-49d9-9124-285a59c96abe", 00:11:25.069 "is_configured": true, 00:11:25.069 "data_offset": 0, 00:11:25.069 "data_size": 65536 00:11:25.069 }, 00:11:25.069 { 00:11:25.069 "name": "BaseBdev4", 00:11:25.069 "uuid": "8f34f279-5f1e-455a-a9ed-9d939f5350eb", 00:11:25.069 "is_configured": true, 00:11:25.070 "data_offset": 0, 00:11:25.070 "data_size": 65536 00:11:25.070 } 00:11:25.070 ] 00:11:25.070 }' 00:11:25.070 13:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.070 13:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.329 [2024-11-18 13:27:55.292759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.329 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.329 
13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:25.329 "name": "Existed_Raid", 00:11:25.329 "aliases": [ 00:11:25.329 "a85da8fd-0559-4917-8425-eee4f42891e9" 00:11:25.329 ], 00:11:25.329 "product_name": "Raid Volume", 00:11:25.329 "block_size": 512, 00:11:25.329 "num_blocks": 262144, 00:11:25.329 "uuid": "a85da8fd-0559-4917-8425-eee4f42891e9", 00:11:25.329 "assigned_rate_limits": { 00:11:25.329 "rw_ios_per_sec": 0, 00:11:25.329 "rw_mbytes_per_sec": 0, 00:11:25.329 "r_mbytes_per_sec": 0, 00:11:25.329 "w_mbytes_per_sec": 0 00:11:25.329 }, 00:11:25.329 "claimed": false, 00:11:25.329 "zoned": false, 00:11:25.329 "supported_io_types": { 00:11:25.329 "read": true, 00:11:25.329 "write": true, 00:11:25.329 "unmap": true, 00:11:25.329 "flush": true, 00:11:25.329 "reset": true, 00:11:25.329 "nvme_admin": false, 00:11:25.329 "nvme_io": false, 00:11:25.329 "nvme_io_md": false, 00:11:25.329 "write_zeroes": true, 00:11:25.329 "zcopy": false, 00:11:25.329 "get_zone_info": false, 00:11:25.329 "zone_management": false, 00:11:25.329 "zone_append": false, 00:11:25.329 "compare": false, 00:11:25.329 "compare_and_write": false, 00:11:25.329 "abort": false, 00:11:25.329 "seek_hole": false, 00:11:25.329 "seek_data": false, 00:11:25.329 "copy": false, 00:11:25.329 "nvme_iov_md": false 00:11:25.329 }, 00:11:25.329 "memory_domains": [ 00:11:25.329 { 00:11:25.329 "dma_device_id": "system", 00:11:25.329 "dma_device_type": 1 00:11:25.329 }, 00:11:25.329 { 00:11:25.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.329 "dma_device_type": 2 00:11:25.329 }, 00:11:25.329 { 00:11:25.329 "dma_device_id": "system", 00:11:25.329 "dma_device_type": 1 00:11:25.329 }, 00:11:25.329 { 00:11:25.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.329 "dma_device_type": 2 00:11:25.329 }, 00:11:25.329 { 00:11:25.329 "dma_device_id": "system", 00:11:25.329 "dma_device_type": 1 00:11:25.329 }, 00:11:25.329 { 00:11:25.329 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:25.329 "dma_device_type": 2 00:11:25.329 }, 00:11:25.329 { 00:11:25.329 "dma_device_id": "system", 00:11:25.329 "dma_device_type": 1 00:11:25.329 }, 00:11:25.329 { 00:11:25.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.329 "dma_device_type": 2 00:11:25.329 } 00:11:25.329 ], 00:11:25.329 "driver_specific": { 00:11:25.329 "raid": { 00:11:25.329 "uuid": "a85da8fd-0559-4917-8425-eee4f42891e9", 00:11:25.329 "strip_size_kb": 64, 00:11:25.329 "state": "online", 00:11:25.329 "raid_level": "concat", 00:11:25.329 "superblock": false, 00:11:25.329 "num_base_bdevs": 4, 00:11:25.329 "num_base_bdevs_discovered": 4, 00:11:25.330 "num_base_bdevs_operational": 4, 00:11:25.330 "base_bdevs_list": [ 00:11:25.330 { 00:11:25.330 "name": "BaseBdev1", 00:11:25.330 "uuid": "fa14f2bb-9878-441b-9ded-f0938fa78882", 00:11:25.330 "is_configured": true, 00:11:25.330 "data_offset": 0, 00:11:25.330 "data_size": 65536 00:11:25.330 }, 00:11:25.330 { 00:11:25.330 "name": "BaseBdev2", 00:11:25.330 "uuid": "a9cf05f8-186d-477d-914c-69b8e19b00e7", 00:11:25.330 "is_configured": true, 00:11:25.330 "data_offset": 0, 00:11:25.330 "data_size": 65536 00:11:25.330 }, 00:11:25.330 { 00:11:25.330 "name": "BaseBdev3", 00:11:25.330 "uuid": "9948bf49-8f4b-49d9-9124-285a59c96abe", 00:11:25.330 "is_configured": true, 00:11:25.330 "data_offset": 0, 00:11:25.330 "data_size": 65536 00:11:25.330 }, 00:11:25.330 { 00:11:25.330 "name": "BaseBdev4", 00:11:25.330 "uuid": "8f34f279-5f1e-455a-a9ed-9d939f5350eb", 00:11:25.330 "is_configured": true, 00:11:25.330 "data_offset": 0, 00:11:25.330 "data_size": 65536 00:11:25.330 } 00:11:25.330 ] 00:11:25.330 } 00:11:25.330 } 00:11:25.330 }' 00:11:25.330 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:25.598 BaseBdev2 
00:11:25.598 BaseBdev3 00:11:25.598 BaseBdev4' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.598 13:27:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.598 13:27:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.598 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.598 [2024-11-18 13:27:55.583975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.598 [2024-11-18 13:27:55.584012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.598 [2024-11-18 13:27:55.584065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:25.890 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.891 "name": "Existed_Raid", 00:11:25.891 "uuid": "a85da8fd-0559-4917-8425-eee4f42891e9", 00:11:25.891 "strip_size_kb": 64, 00:11:25.891 "state": "offline", 00:11:25.891 "raid_level": "concat", 00:11:25.891 "superblock": false, 00:11:25.891 "num_base_bdevs": 4, 00:11:25.891 "num_base_bdevs_discovered": 3, 00:11:25.891 "num_base_bdevs_operational": 3, 00:11:25.891 "base_bdevs_list": [ 00:11:25.891 { 00:11:25.891 "name": null, 00:11:25.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.891 "is_configured": false, 00:11:25.891 "data_offset": 0, 00:11:25.891 "data_size": 65536 00:11:25.891 }, 00:11:25.891 { 00:11:25.891 "name": "BaseBdev2", 00:11:25.891 "uuid": "a9cf05f8-186d-477d-914c-69b8e19b00e7", 00:11:25.891 "is_configured": 
true, 00:11:25.891 "data_offset": 0, 00:11:25.891 "data_size": 65536 00:11:25.891 }, 00:11:25.891 { 00:11:25.891 "name": "BaseBdev3", 00:11:25.891 "uuid": "9948bf49-8f4b-49d9-9124-285a59c96abe", 00:11:25.891 "is_configured": true, 00:11:25.891 "data_offset": 0, 00:11:25.891 "data_size": 65536 00:11:25.891 }, 00:11:25.891 { 00:11:25.891 "name": "BaseBdev4", 00:11:25.891 "uuid": "8f34f279-5f1e-455a-a9ed-9d939f5350eb", 00:11:25.891 "is_configured": true, 00:11:25.891 "data_offset": 0, 00:11:25.891 "data_size": 65536 00:11:25.891 } 00:11:25.891 ] 00:11:25.891 }' 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.891 13:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:26.151 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.151 [2024-11-18 13:27:56.155806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.411 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.411 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.411 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.411 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.411 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.411 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.412 [2024-11-18 13:27:56.313028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.412 13:27:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.412 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.672 [2024-11-18 13:27:56.469280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:26.672 [2024-11-18 13:27:56.469335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.672 BaseBdev2 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.672 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.672 [ 00:11:26.672 { 00:11:26.672 "name": "BaseBdev2", 00:11:26.672 "aliases": [ 00:11:26.672 "19d92a2c-156f-445e-b86d-59477f74ceb6" 00:11:26.672 ], 00:11:26.672 "product_name": "Malloc disk", 00:11:26.672 "block_size": 512, 00:11:26.672 "num_blocks": 65536, 00:11:26.672 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6", 00:11:26.672 "assigned_rate_limits": { 00:11:26.672 "rw_ios_per_sec": 0, 00:11:26.672 "rw_mbytes_per_sec": 0, 00:11:26.672 "r_mbytes_per_sec": 0, 00:11:26.672 "w_mbytes_per_sec": 0 00:11:26.672 }, 00:11:26.672 "claimed": false, 00:11:26.672 "zoned": false, 00:11:26.672 "supported_io_types": { 00:11:26.672 "read": true, 00:11:26.672 "write": true, 00:11:26.672 "unmap": true, 00:11:26.672 "flush": true, 00:11:26.672 "reset": true, 00:11:26.672 "nvme_admin": false, 00:11:26.672 "nvme_io": false, 00:11:26.672 "nvme_io_md": false, 00:11:26.672 "write_zeroes": true, 00:11:26.672 "zcopy": true, 00:11:26.672 "get_zone_info": false, 00:11:26.672 "zone_management": false, 00:11:26.672 "zone_append": false, 00:11:26.672 "compare": false, 00:11:26.672 "compare_and_write": false, 00:11:26.672 "abort": true, 00:11:26.672 "seek_hole": false, 00:11:26.672 
"seek_data": false, 00:11:26.672 "copy": true, 00:11:26.672 "nvme_iov_md": false 00:11:26.672 }, 00:11:26.673 "memory_domains": [ 00:11:26.673 { 00:11:26.673 "dma_device_id": "system", 00:11:26.673 "dma_device_type": 1 00:11:26.673 }, 00:11:26.673 { 00:11:26.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.673 "dma_device_type": 2 00:11:26.673 } 00:11:26.673 ], 00:11:26.673 "driver_specific": {} 00:11:26.673 } 00:11:26.673 ] 00:11:26.673 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.673 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.673 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.673 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.673 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.673 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.673 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.932 BaseBdev3 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.932 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.932 [ 00:11:26.932 { 00:11:26.933 "name": "BaseBdev3", 00:11:26.933 "aliases": [ 00:11:26.933 "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b" 00:11:26.933 ], 00:11:26.933 "product_name": "Malloc disk", 00:11:26.933 "block_size": 512, 00:11:26.933 "num_blocks": 65536, 00:11:26.933 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b", 00:11:26.933 "assigned_rate_limits": { 00:11:26.933 "rw_ios_per_sec": 0, 00:11:26.933 "rw_mbytes_per_sec": 0, 00:11:26.933 "r_mbytes_per_sec": 0, 00:11:26.933 "w_mbytes_per_sec": 0 00:11:26.933 }, 00:11:26.933 "claimed": false, 00:11:26.933 "zoned": false, 00:11:26.933 "supported_io_types": { 00:11:26.933 "read": true, 00:11:26.933 "write": true, 00:11:26.933 "unmap": true, 00:11:26.933 "flush": true, 00:11:26.933 "reset": true, 00:11:26.933 "nvme_admin": false, 00:11:26.933 "nvme_io": false, 00:11:26.933 "nvme_io_md": false, 00:11:26.933 "write_zeroes": true, 00:11:26.933 "zcopy": true, 00:11:26.933 "get_zone_info": false, 00:11:26.933 "zone_management": false, 00:11:26.933 "zone_append": false, 00:11:26.933 "compare": false, 00:11:26.933 "compare_and_write": false, 00:11:26.933 "abort": true, 00:11:26.933 "seek_hole": false, 00:11:26.933 "seek_data": false, 
00:11:26.933 "copy": true,
00:11:26.933 "nvme_iov_md": false
00:11:26.933 },
00:11:26.933 "memory_domains": [
00:11:26.933 {
00:11:26.933 "dma_device_id": "system",
00:11:26.933 "dma_device_type": 1
00:11:26.933 },
00:11:26.933 {
00:11:26.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.933 "dma_device_type": 2
00:11:26.933 }
00:11:26.933 ],
00:11:26.933 "driver_specific": {}
00:11:26.933 }
00:11:26.933 ]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.933 BaseBdev4
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.933 [
00:11:26.933 {
00:11:26.933 "name": "BaseBdev4",
00:11:26.933 "aliases": [
00:11:26.933 "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4"
00:11:26.933 ],
00:11:26.933 "product_name": "Malloc disk",
00:11:26.933 "block_size": 512,
00:11:26.933 "num_blocks": 65536,
00:11:26.933 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4",
00:11:26.933 "assigned_rate_limits": {
00:11:26.933 "rw_ios_per_sec": 0,
00:11:26.933 "rw_mbytes_per_sec": 0,
00:11:26.933 "r_mbytes_per_sec": 0,
00:11:26.933 "w_mbytes_per_sec": 0
00:11:26.933 },
00:11:26.933 "claimed": false,
00:11:26.933 "zoned": false,
00:11:26.933 "supported_io_types": {
00:11:26.933 "read": true,
00:11:26.933 "write": true,
00:11:26.933 "unmap": true,
00:11:26.933 "flush": true,
00:11:26.933 "reset": true,
00:11:26.933 "nvme_admin": false,
00:11:26.933 "nvme_io": false,
00:11:26.933 "nvme_io_md": false,
00:11:26.933 "write_zeroes": true,
00:11:26.933 "zcopy": true,
00:11:26.933 "get_zone_info": false,
00:11:26.933 "zone_management": false,
00:11:26.933 "zone_append": false,
00:11:26.933 "compare": false,
00:11:26.933 "compare_and_write": false,
00:11:26.933 "abort": true,
00:11:26.933 "seek_hole": false,
00:11:26.933 "seek_data": false,
00:11:26.933 "copy": true,
00:11:26.933 "nvme_iov_md": false
00:11:26.933 },
00:11:26.933 "memory_domains": [
00:11:26.933 {
00:11:26.933 "dma_device_id": "system",
00:11:26.933 "dma_device_type": 1
00:11:26.933 },
00:11:26.933 {
00:11:26.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:26.933 "dma_device_type": 2
00:11:26.933 }
00:11:26.933 ],
00:11:26.933 "driver_specific": {}
00:11:26.933 }
00:11:26.933 ]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.933 [2024-11-18 13:27:56.862538] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:26.933 [2024-11-18 13:27:56.862589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:26.933 [2024-11-18 13:27:56.862611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:26.933 [2024-11-18 13:27:56.864545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:26.933 [2024-11-18 13:27:56.864607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.933 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:26.933 "name": "Existed_Raid",
00:11:26.933 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:26.933 "strip_size_kb": 64,
00:11:26.933 "state": "configuring",
"raid_level": "concat",
00:11:26.933 "superblock": false,
00:11:26.933 "num_base_bdevs": 4,
00:11:26.933 "num_base_bdevs_discovered": 3,
00:11:26.933 "num_base_bdevs_operational": 4,
00:11:26.933 "base_bdevs_list": [
00:11:26.933 {
00:11:26.933 "name": "BaseBdev1",
00:11:26.933 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:26.933 "is_configured": false,
00:11:26.933 "data_offset": 0,
00:11:26.933 "data_size": 0
00:11:26.933 },
00:11:26.933 {
00:11:26.933 "name": "BaseBdev2",
00:11:26.933 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6",
00:11:26.933 "is_configured": true,
00:11:26.933 "data_offset": 0,
00:11:26.933 "data_size": 65536
00:11:26.933 },
00:11:26.933 {
00:11:26.933 "name": "BaseBdev3",
00:11:26.933 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b",
00:11:26.933 "is_configured": true,
00:11:26.933 "data_offset": 0,
00:11:26.933 "data_size": 65536
00:11:26.933 },
00:11:26.933 {
00:11:26.933 "name": "BaseBdev4",
00:11:26.933 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4",
00:11:26.933 "is_configured": true,
00:11:26.933 "data_offset": 0,
00:11:26.933 "data_size": 65536
00:11:26.933 }
00:11:26.933 ]
00:11:26.933 }'
00:11:26.934 13:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:26.934 13:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.500 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.501 [2024-11-18 13:27:57.301818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:27.501 "name": "Existed_Raid",
00:11:27.501 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:27.501 "strip_size_kb": 64,
00:11:27.501 "state": "configuring",
00:11:27.501 "raid_level": "concat",
00:11:27.501 "superblock": false,
00:11:27.501 "num_base_bdevs": 4,
00:11:27.501 "num_base_bdevs_discovered": 2,
00:11:27.501 "num_base_bdevs_operational": 4,
00:11:27.501 "base_bdevs_list": [
00:11:27.501 {
00:11:27.501 "name": "BaseBdev1",
00:11:27.501 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:27.501 "is_configured": false,
00:11:27.501 "data_offset": 0,
00:11:27.501 "data_size": 0
00:11:27.501 },
00:11:27.501 {
00:11:27.501 "name": null,
00:11:27.501 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6",
00:11:27.501 "is_configured": false,
00:11:27.501 "data_offset": 0,
00:11:27.501 "data_size": 65536
00:11:27.501 },
00:11:27.501 {
00:11:27.501 "name": "BaseBdev3",
00:11:27.501 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b",
00:11:27.501 "is_configured": true,
00:11:27.501 "data_offset": 0,
00:11:27.501 "data_size": 65536
00:11:27.501 },
00:11:27.501 {
00:11:27.501 "name": "BaseBdev4",
00:11:27.501 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4",
00:11:27.501 "is_configured": true,
00:11:27.501 "data_offset": 0,
00:11:27.501 "data_size": 65536
00:11:27.501 }
00:11:27.501 ]
00:11:27.501 }'
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:27.501 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.760 [2024-11-18 13:27:57.785835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:27.760 BaseBdev1
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.760 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.019 [
00:11:28.019 {
00:11:28.019 "name": "BaseBdev1",
00:11:28.019 "aliases": [
00:11:28.019 "b899cad9-9fa2-4b33-8561-ec10a43e5b56"
00:11:28.019 ],
00:11:28.019 "product_name": "Malloc disk",
00:11:28.019 "block_size": 512,
00:11:28.019 "num_blocks": 65536,
00:11:28.019 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56",
00:11:28.019 "assigned_rate_limits": {
00:11:28.019 "rw_ios_per_sec": 0,
00:11:28.019 "rw_mbytes_per_sec": 0,
00:11:28.019 "r_mbytes_per_sec": 0,
00:11:28.019 "w_mbytes_per_sec": 0
00:11:28.019 },
00:11:28.019 "claimed": true,
00:11:28.020 "claim_type": "exclusive_write",
00:11:28.020 "zoned": false,
00:11:28.020 "supported_io_types": {
00:11:28.020 "read": true,
00:11:28.020 "write": true,
00:11:28.020 "unmap": true,
00:11:28.020 "flush": true,
00:11:28.020 "reset": true,
00:11:28.020 "nvme_admin": false,
00:11:28.020 "nvme_io": false,
00:11:28.020 "nvme_io_md": false,
00:11:28.020 "write_zeroes": true,
00:11:28.020 "zcopy": true,
00:11:28.020 "get_zone_info": false,
00:11:28.020 "zone_management": false,
00:11:28.020 "zone_append": false,
00:11:28.020 "compare": false,
00:11:28.020 "compare_and_write": false,
00:11:28.020 "abort": true,
00:11:28.020 "seek_hole": false,
00:11:28.020 "seek_data": false,
00:11:28.020 "copy": true,
00:11:28.020 "nvme_iov_md": false
00:11:28.020 },
00:11:28.020 "memory_domains": [
00:11:28.020 {
00:11:28.020 "dma_device_id": "system",
00:11:28.020 "dma_device_type": 1
00:11:28.020 },
00:11:28.020 {
00:11:28.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:28.020 "dma_device_type": 2
00:11:28.020 }
00:11:28.020 ],
00:11:28.020 "driver_specific": {}
00:11:28.020 }
00:11:28.020 ]
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.020 "name": "Existed_Raid",
00:11:28.020 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.020 "strip_size_kb": 64,
00:11:28.020 "state": "configuring",
00:11:28.020 "raid_level": "concat",
00:11:28.020 "superblock": false,
00:11:28.020 "num_base_bdevs": 4,
00:11:28.020 "num_base_bdevs_discovered": 3,
00:11:28.020 "num_base_bdevs_operational": 4,
00:11:28.020 "base_bdevs_list": [
00:11:28.020 {
00:11:28.020 "name": "BaseBdev1",
00:11:28.020 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56",
00:11:28.020 "is_configured": true,
00:11:28.020 "data_offset": 0,
00:11:28.020 "data_size": 65536
00:11:28.020 },
00:11:28.020 {
00:11:28.020 "name": null,
00:11:28.020 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6",
00:11:28.020 "is_configured": false,
00:11:28.020 "data_offset": 0,
00:11:28.020 "data_size": 65536
00:11:28.020 },
00:11:28.020 {
00:11:28.020 "name": "BaseBdev3",
00:11:28.020 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b",
00:11:28.020 "is_configured": true,
00:11:28.020 "data_offset": 0,
00:11:28.020 "data_size": 65536
00:11:28.020 },
00:11:28.020 {
00:11:28.020 "name": "BaseBdev4",
00:11:28.020 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4",
00:11:28.020 "is_configured": true,
00:11:28.020 "data_offset": 0,
00:11:28.020 "data_size": 65536
00:11:28.020 }
00:11:28.020 ]
00:11:28.020 }'
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.020 13:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.279 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.279 [2024-11-18 13:27:58.329099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.538 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.538 "name": "Existed_Raid",
00:11:28.538 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.538 "strip_size_kb": 64,
00:11:28.538 "state": "configuring",
00:11:28.538 "raid_level": "concat",
00:11:28.538 "superblock": false,
00:11:28.538 "num_base_bdevs": 4,
00:11:28.538 "num_base_bdevs_discovered": 2,
00:11:28.538 "num_base_bdevs_operational": 4,
00:11:28.538 "base_bdevs_list": [
00:11:28.538 {
00:11:28.538 "name": "BaseBdev1",
00:11:28.538 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56",
00:11:28.538 "is_configured": true,
00:11:28.538 "data_offset": 0,
00:11:28.538 "data_size": 65536
00:11:28.538 },
00:11:28.538 {
00:11:28.538 "name": null,
00:11:28.538 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6",
00:11:28.538 "is_configured": false,
00:11:28.538 "data_offset": 0,
00:11:28.538 "data_size": 65536
00:11:28.538 },
00:11:28.538 {
00:11:28.538 "name": null,
00:11:28.538 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b",
00:11:28.538 "is_configured": false,
00:11:28.538 "data_offset": 0,
00:11:28.538 "data_size": 65536
00:11:28.538 },
00:11:28.538 {
00:11:28.538 "name": "BaseBdev4",
00:11:28.538 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4",
00:11:28.538 "is_configured": true,
00:11:28.538 "data_offset": 0,
00:11:28.539 "data_size": 65536
00:11:28.539 }
00:11:28.539 ]
00:11:28.539 }'
00:11:28.539 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.539 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.798 [2024-11-18 13:27:58.800296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.798 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:28.798 "name": "Existed_Raid",
00:11:28.798 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:28.798 "strip_size_kb": 64,
00:11:28.798 "state": "configuring",
00:11:28.798 "raid_level": "concat",
00:11:28.798 "superblock": false,
00:11:28.798 "num_base_bdevs": 4,
00:11:28.798 "num_base_bdevs_discovered": 3,
00:11:28.798 "num_base_bdevs_operational": 4,
00:11:28.798 "base_bdevs_list": [
00:11:28.798 {
00:11:28.798 "name": "BaseBdev1",
00:11:28.798 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56",
00:11:28.798 "is_configured": true,
00:11:28.798 "data_offset": 0,
00:11:28.798 "data_size": 65536
00:11:28.798 },
00:11:28.798 {
00:11:28.798 "name": null,
00:11:28.798 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6",
00:11:28.798 "is_configured": false,
00:11:28.798 "data_offset": 0,
00:11:28.798 "data_size": 65536
00:11:28.798 },
00:11:28.798 {
00:11:28.798 "name": "BaseBdev3",
00:11:28.799 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b",
00:11:28.799 "is_configured": true,
00:11:28.799 "data_offset": 0,
00:11:28.799 "data_size": 65536
00:11:28.799 },
00:11:28.799 {
00:11:28.799 "name": "BaseBdev4",
00:11:28.799 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4",
00:11:28.799 "is_configured": true,
00:11:28.799 "data_offset": 0,
00:11:28.799 "data_size": 65536
00:11:28.799 }
00:11:28.799 ]
00:11:28.799 }'
00:11:28.799 13:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:28.799 13:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.366 [2024-11-18 13:27:59.287497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.366 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.625 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.625 "name": "Existed_Raid",
00:11:29.625 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:29.625 "strip_size_kb": 64,
00:11:29.625 "state": "configuring",
00:11:29.625 "raid_level": "concat",
00:11:29.625 "superblock": false,
00:11:29.625 "num_base_bdevs": 4,
00:11:29.625 "num_base_bdevs_discovered": 2,
00:11:29.625 "num_base_bdevs_operational": 4,
00:11:29.625 "base_bdevs_list": [
00:11:29.625 {
00:11:29.625 "name": null,
00:11:29.625 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56",
00:11:29.625 "is_configured": false,
00:11:29.625 "data_offset": 0,
00:11:29.625 "data_size": 65536
00:11:29.625 },
00:11:29.625 {
00:11:29.625 "name": null,
00:11:29.625 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6",
00:11:29.625 "is_configured": false,
00:11:29.625 "data_offset": 0,
00:11:29.625 "data_size": 65536
00:11:29.625 },
00:11:29.625 {
00:11:29.625 "name": "BaseBdev3",
00:11:29.625 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b",
00:11:29.625 "is_configured": true,
00:11:29.625 "data_offset": 0,
00:11:29.625 "data_size": 65536
00:11:29.625 },
00:11:29.625 {
00:11:29.625 "name": "BaseBdev4",
00:11:29.625 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4",
00:11:29.625 "is_configured": true,
00:11:29.625 "data_offset": 0,
00:11:29.625 "data_size": 65536
00:11:29.625 }
00:11:29.625 ]
00:11:29.625 }'
00:11:29.625 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.625 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.885 [2024-11-18 13:27:59.886914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.885 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.144 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.144 "name": "Existed_Raid", 00:11:30.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.144 "strip_size_kb": 64, 00:11:30.144 "state": "configuring", 00:11:30.144 "raid_level": "concat", 00:11:30.144 "superblock": false, 00:11:30.144 "num_base_bdevs": 4, 00:11:30.144 "num_base_bdevs_discovered": 3, 00:11:30.144 "num_base_bdevs_operational": 4, 00:11:30.144 "base_bdevs_list": [ 00:11:30.144 { 00:11:30.144 "name": null, 00:11:30.144 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56", 00:11:30.144 "is_configured": false, 00:11:30.144 "data_offset": 0, 00:11:30.144 "data_size": 65536 00:11:30.144 }, 00:11:30.144 { 00:11:30.144 "name": "BaseBdev2", 00:11:30.144 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6", 00:11:30.144 "is_configured": true, 00:11:30.144 "data_offset": 0, 00:11:30.144 "data_size": 65536 00:11:30.144 }, 00:11:30.144 { 00:11:30.144 "name": "BaseBdev3", 00:11:30.144 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b", 00:11:30.144 "is_configured": true, 00:11:30.144 "data_offset": 0, 00:11:30.144 "data_size": 65536 00:11:30.144 }, 00:11:30.144 { 00:11:30.144 "name": "BaseBdev4", 00:11:30.144 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4", 00:11:30.144 "is_configured": true, 00:11:30.144 "data_offset": 0, 00:11:30.144 "data_size": 65536 00:11:30.144 } 00:11:30.144 ] 00:11:30.144 }' 00:11:30.144 13:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.144 13:27:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.403 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:30.403 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.403 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.403 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.403 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b899cad9-9fa2-4b33-8561-ec10a43e5b56 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.404 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.404 [2024-11-18 13:28:00.451968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:30.404 [2024-11-18 13:28:00.452034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.404 [2024-11-18 13:28:00.452042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:30.404 [2024-11-18 13:28:00.452338] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:30.404 [2024-11-18 13:28:00.452490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.404 [2024-11-18 13:28:00.452509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:30.404 [2024-11-18 13:28:00.452754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.663 NewBaseBdev 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.663 13:28:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.663 [ 00:11:30.663 { 00:11:30.663 "name": "NewBaseBdev", 00:11:30.663 "aliases": [ 00:11:30.663 "b899cad9-9fa2-4b33-8561-ec10a43e5b56" 00:11:30.663 ], 00:11:30.663 "product_name": "Malloc disk", 00:11:30.663 "block_size": 512, 00:11:30.663 "num_blocks": 65536, 00:11:30.663 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56", 00:11:30.663 "assigned_rate_limits": { 00:11:30.663 "rw_ios_per_sec": 0, 00:11:30.663 "rw_mbytes_per_sec": 0, 00:11:30.663 "r_mbytes_per_sec": 0, 00:11:30.663 "w_mbytes_per_sec": 0 00:11:30.663 }, 00:11:30.663 "claimed": true, 00:11:30.663 "claim_type": "exclusive_write", 00:11:30.663 "zoned": false, 00:11:30.663 "supported_io_types": { 00:11:30.663 "read": true, 00:11:30.663 "write": true, 00:11:30.663 "unmap": true, 00:11:30.663 "flush": true, 00:11:30.663 "reset": true, 00:11:30.663 "nvme_admin": false, 00:11:30.663 "nvme_io": false, 00:11:30.663 "nvme_io_md": false, 00:11:30.663 "write_zeroes": true, 00:11:30.663 "zcopy": true, 00:11:30.663 "get_zone_info": false, 00:11:30.663 "zone_management": false, 00:11:30.663 "zone_append": false, 00:11:30.663 "compare": false, 00:11:30.663 "compare_and_write": false, 00:11:30.663 "abort": true, 00:11:30.663 "seek_hole": false, 00:11:30.663 "seek_data": false, 00:11:30.663 "copy": true, 00:11:30.663 "nvme_iov_md": false 00:11:30.663 }, 00:11:30.663 "memory_domains": [ 00:11:30.663 { 00:11:30.663 "dma_device_id": "system", 00:11:30.663 "dma_device_type": 1 00:11:30.663 }, 00:11:30.663 { 00:11:30.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.663 "dma_device_type": 2 00:11:30.663 } 00:11:30.663 ], 00:11:30.663 "driver_specific": {} 00:11:30.663 } 00:11:30.663 ] 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:30.663 13:28:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.663 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.663 "name": "Existed_Raid", 00:11:30.663 "uuid": "907f06e3-719e-416f-82e4-8ee789ac23e2", 00:11:30.663 "strip_size_kb": 64, 00:11:30.663 "state": "online", 00:11:30.663 "raid_level": 
"concat", 00:11:30.663 "superblock": false, 00:11:30.663 "num_base_bdevs": 4, 00:11:30.663 "num_base_bdevs_discovered": 4, 00:11:30.663 "num_base_bdevs_operational": 4, 00:11:30.663 "base_bdevs_list": [ 00:11:30.663 { 00:11:30.663 "name": "NewBaseBdev", 00:11:30.663 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56", 00:11:30.663 "is_configured": true, 00:11:30.663 "data_offset": 0, 00:11:30.663 "data_size": 65536 00:11:30.663 }, 00:11:30.663 { 00:11:30.663 "name": "BaseBdev2", 00:11:30.663 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6", 00:11:30.663 "is_configured": true, 00:11:30.663 "data_offset": 0, 00:11:30.663 "data_size": 65536 00:11:30.663 }, 00:11:30.663 { 00:11:30.663 "name": "BaseBdev3", 00:11:30.663 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b", 00:11:30.663 "is_configured": true, 00:11:30.663 "data_offset": 0, 00:11:30.663 "data_size": 65536 00:11:30.663 }, 00:11:30.663 { 00:11:30.663 "name": "BaseBdev4", 00:11:30.663 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4", 00:11:30.664 "is_configured": true, 00:11:30.664 "data_offset": 0, 00:11:30.664 "data_size": 65536 00:11:30.664 } 00:11:30.664 ] 00:11:30.664 }' 00:11:30.664 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.664 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.927 [2024-11-18 13:28:00.927634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.927 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.927 "name": "Existed_Raid", 00:11:30.927 "aliases": [ 00:11:30.927 "907f06e3-719e-416f-82e4-8ee789ac23e2" 00:11:30.927 ], 00:11:30.927 "product_name": "Raid Volume", 00:11:30.927 "block_size": 512, 00:11:30.927 "num_blocks": 262144, 00:11:30.927 "uuid": "907f06e3-719e-416f-82e4-8ee789ac23e2", 00:11:30.927 "assigned_rate_limits": { 00:11:30.927 "rw_ios_per_sec": 0, 00:11:30.927 "rw_mbytes_per_sec": 0, 00:11:30.927 "r_mbytes_per_sec": 0, 00:11:30.927 "w_mbytes_per_sec": 0 00:11:30.927 }, 00:11:30.927 "claimed": false, 00:11:30.927 "zoned": false, 00:11:30.927 "supported_io_types": { 00:11:30.927 "read": true, 00:11:30.927 "write": true, 00:11:30.927 "unmap": true, 00:11:30.927 "flush": true, 00:11:30.927 "reset": true, 00:11:30.927 "nvme_admin": false, 00:11:30.927 "nvme_io": false, 00:11:30.927 "nvme_io_md": false, 00:11:30.927 "write_zeroes": true, 00:11:30.927 "zcopy": false, 00:11:30.927 "get_zone_info": false, 00:11:30.927 "zone_management": false, 00:11:30.927 "zone_append": false, 00:11:30.927 "compare": false, 00:11:30.927 "compare_and_write": false, 00:11:30.927 "abort": false, 00:11:30.927 "seek_hole": false, 00:11:30.927 "seek_data": false, 00:11:30.927 "copy": false, 
00:11:30.927 "nvme_iov_md": false 00:11:30.927 }, 00:11:30.927 "memory_domains": [ 00:11:30.927 { 00:11:30.927 "dma_device_id": "system", 00:11:30.927 "dma_device_type": 1 00:11:30.927 }, 00:11:30.927 { 00:11:30.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.927 "dma_device_type": 2 00:11:30.927 }, 00:11:30.927 { 00:11:30.927 "dma_device_id": "system", 00:11:30.927 "dma_device_type": 1 00:11:30.927 }, 00:11:30.927 { 00:11:30.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.927 "dma_device_type": 2 00:11:30.927 }, 00:11:30.927 { 00:11:30.927 "dma_device_id": "system", 00:11:30.927 "dma_device_type": 1 00:11:30.927 }, 00:11:30.927 { 00:11:30.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.927 "dma_device_type": 2 00:11:30.927 }, 00:11:30.927 { 00:11:30.927 "dma_device_id": "system", 00:11:30.927 "dma_device_type": 1 00:11:30.927 }, 00:11:30.927 { 00:11:30.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.928 "dma_device_type": 2 00:11:30.928 } 00:11:30.928 ], 00:11:30.928 "driver_specific": { 00:11:30.928 "raid": { 00:11:30.928 "uuid": "907f06e3-719e-416f-82e4-8ee789ac23e2", 00:11:30.928 "strip_size_kb": 64, 00:11:30.928 "state": "online", 00:11:30.928 "raid_level": "concat", 00:11:30.928 "superblock": false, 00:11:30.928 "num_base_bdevs": 4, 00:11:30.928 "num_base_bdevs_discovered": 4, 00:11:30.928 "num_base_bdevs_operational": 4, 00:11:30.928 "base_bdevs_list": [ 00:11:30.928 { 00:11:30.928 "name": "NewBaseBdev", 00:11:30.928 "uuid": "b899cad9-9fa2-4b33-8561-ec10a43e5b56", 00:11:30.928 "is_configured": true, 00:11:30.928 "data_offset": 0, 00:11:30.928 "data_size": 65536 00:11:30.928 }, 00:11:30.928 { 00:11:30.928 "name": "BaseBdev2", 00:11:30.928 "uuid": "19d92a2c-156f-445e-b86d-59477f74ceb6", 00:11:30.928 "is_configured": true, 00:11:30.928 "data_offset": 0, 00:11:30.928 "data_size": 65536 00:11:30.928 }, 00:11:30.928 { 00:11:30.928 "name": "BaseBdev3", 00:11:30.928 "uuid": "5502dbc5-8aa9-4d39-a528-8aaec6e4c64b", 00:11:30.928 
"is_configured": true, 00:11:30.928 "data_offset": 0, 00:11:30.928 "data_size": 65536 00:11:30.928 }, 00:11:30.928 { 00:11:30.928 "name": "BaseBdev4", 00:11:30.928 "uuid": "6bc38b75-a670-4c4d-a4b8-ef21e3be12c4", 00:11:30.928 "is_configured": true, 00:11:30.928 "data_offset": 0, 00:11:30.928 "data_size": 65536 00:11:30.928 } 00:11:30.928 ] 00:11:30.928 } 00:11:30.928 } 00:11:30.928 }' 00:11:30.928 13:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:31.207 BaseBdev2 00:11:31.207 BaseBdev3 00:11:31.207 BaseBdev4' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.207 13:28:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.207 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.208 13:28:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.208 [2024-11-18 13:28:01.234785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.208 [2024-11-18 13:28:01.234834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.208 [2024-11-18 13:28:01.234925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.208 [2024-11-18 13:28:01.235008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.208 [2024-11-18 13:28:01.235027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71282 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71282 ']' 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71282 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.208 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71282 00:11:31.467 killing process with pid 71282 00:11:31.467 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.467 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.467 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71282' 00:11:31.467 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71282 00:11:31.467 [2024-11-18 13:28:01.268894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.467 13:28:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71282 00:11:31.726 [2024-11-18 13:28:01.665846] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.102 13:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:33.102 00:11:33.102 real 0m11.446s 00:11:33.102 user 0m18.116s 00:11:33.102 sys 0m2.080s 00:11:33.102 13:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.102 13:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.102 ************************************ 00:11:33.102 END TEST raid_state_function_test 00:11:33.102 ************************************ 
00:11:33.102 13:28:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:33.102 13:28:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:33.102 13:28:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.102 13:28:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.102 ************************************ 00:11:33.102 START TEST raid_state_function_test_sb 00:11:33.103 ************************************ 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.103 
13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=71958 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71958' 00:11:33.103 Process raid pid: 71958 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71958 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71958 ']' 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.103 13:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.103 [2024-11-18 13:28:02.934753] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:33.103 [2024-11-18 13:28:02.934894] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.103 [2024-11-18 13:28:03.098783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.362 [2024-11-18 13:28:03.212045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.621 [2024-11-18 13:28:03.414458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.621 [2024-11-18 13:28:03.414502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.880 [2024-11-18 13:28:03.771801] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.880 [2024-11-18 13:28:03.771859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.880 [2024-11-18 13:28:03.771869] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.880 [2024-11-18 13:28:03.771878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.880 [2024-11-18 13:28:03.771884] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:33.880 [2024-11-18 13:28:03.771894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.880 [2024-11-18 13:28:03.771900] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.880 [2024-11-18 13:28:03.771908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.880 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.881 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.881 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.881 
13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.881 13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.881 13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.881 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.881 "name": "Existed_Raid", 00:11:33.881 "uuid": "0b028187-7121-41ba-a48f-ac2dcc2e409b", 00:11:33.881 "strip_size_kb": 64, 00:11:33.881 "state": "configuring", 00:11:33.881 "raid_level": "concat", 00:11:33.881 "superblock": true, 00:11:33.881 "num_base_bdevs": 4, 00:11:33.881 "num_base_bdevs_discovered": 0, 00:11:33.881 "num_base_bdevs_operational": 4, 00:11:33.881 "base_bdevs_list": [ 00:11:33.881 { 00:11:33.881 "name": "BaseBdev1", 00:11:33.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.881 "is_configured": false, 00:11:33.881 "data_offset": 0, 00:11:33.881 "data_size": 0 00:11:33.881 }, 00:11:33.881 { 00:11:33.881 "name": "BaseBdev2", 00:11:33.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.881 "is_configured": false, 00:11:33.881 "data_offset": 0, 00:11:33.881 "data_size": 0 00:11:33.881 }, 00:11:33.881 { 00:11:33.881 "name": "BaseBdev3", 00:11:33.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.881 "is_configured": false, 00:11:33.881 "data_offset": 0, 00:11:33.881 "data_size": 0 00:11:33.881 }, 00:11:33.881 { 00:11:33.881 "name": "BaseBdev4", 00:11:33.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.881 "is_configured": false, 00:11:33.881 "data_offset": 0, 00:11:33.881 "data_size": 0 00:11:33.881 } 00:11:33.881 ] 00:11:33.881 }' 00:11:33.881 13:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.881 13:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.450 13:28:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.451 [2024-11-18 13:28:04.238939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.451 [2024-11-18 13:28:04.238988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.451 [2024-11-18 13:28:04.250902] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.451 [2024-11-18 13:28:04.250952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.451 [2024-11-18 13:28:04.250961] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.451 [2024-11-18 13:28:04.250970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.451 [2024-11-18 13:28:04.250978] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.451 [2024-11-18 13:28:04.250988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.451 [2024-11-18 13:28:04.250994] bdev.c:8259:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:34.451 [2024-11-18 13:28:04.251004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.451 [2024-11-18 13:28:04.298800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.451 BaseBdev1 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.451 [ 00:11:34.451 { 00:11:34.451 "name": "BaseBdev1", 00:11:34.451 "aliases": [ 00:11:34.451 "c9b43e31-58bd-4de1-bd69-f90d00cf685f" 00:11:34.451 ], 00:11:34.451 "product_name": "Malloc disk", 00:11:34.451 "block_size": 512, 00:11:34.451 "num_blocks": 65536, 00:11:34.451 "uuid": "c9b43e31-58bd-4de1-bd69-f90d00cf685f", 00:11:34.451 "assigned_rate_limits": { 00:11:34.451 "rw_ios_per_sec": 0, 00:11:34.451 "rw_mbytes_per_sec": 0, 00:11:34.451 "r_mbytes_per_sec": 0, 00:11:34.451 "w_mbytes_per_sec": 0 00:11:34.451 }, 00:11:34.451 "claimed": true, 00:11:34.451 "claim_type": "exclusive_write", 00:11:34.451 "zoned": false, 00:11:34.451 "supported_io_types": { 00:11:34.451 "read": true, 00:11:34.451 "write": true, 00:11:34.451 "unmap": true, 00:11:34.451 "flush": true, 00:11:34.451 "reset": true, 00:11:34.451 "nvme_admin": false, 00:11:34.451 "nvme_io": false, 00:11:34.451 "nvme_io_md": false, 00:11:34.451 "write_zeroes": true, 00:11:34.451 "zcopy": true, 00:11:34.451 "get_zone_info": false, 00:11:34.451 "zone_management": false, 00:11:34.451 "zone_append": false, 00:11:34.451 "compare": false, 00:11:34.451 "compare_and_write": false, 00:11:34.451 "abort": true, 00:11:34.451 "seek_hole": false, 00:11:34.451 "seek_data": false, 00:11:34.451 "copy": true, 00:11:34.451 "nvme_iov_md": false 00:11:34.451 }, 00:11:34.451 "memory_domains": [ 00:11:34.451 { 00:11:34.451 "dma_device_id": "system", 00:11:34.451 "dma_device_type": 1 00:11:34.451 }, 00:11:34.451 { 00:11:34.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.451 "dma_device_type": 2 00:11:34.451 } 
00:11:34.451 ], 00:11:34.451 "driver_specific": {} 00:11:34.451 } 00:11:34.451 ] 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.451 13:28:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.451 "name": "Existed_Raid", 00:11:34.451 "uuid": "14494194-59f6-47f9-af96-57632658c7cd", 00:11:34.451 "strip_size_kb": 64, 00:11:34.451 "state": "configuring", 00:11:34.451 "raid_level": "concat", 00:11:34.451 "superblock": true, 00:11:34.451 "num_base_bdevs": 4, 00:11:34.451 "num_base_bdevs_discovered": 1, 00:11:34.451 "num_base_bdevs_operational": 4, 00:11:34.451 "base_bdevs_list": [ 00:11:34.451 { 00:11:34.451 "name": "BaseBdev1", 00:11:34.451 "uuid": "c9b43e31-58bd-4de1-bd69-f90d00cf685f", 00:11:34.451 "is_configured": true, 00:11:34.451 "data_offset": 2048, 00:11:34.451 "data_size": 63488 00:11:34.451 }, 00:11:34.451 { 00:11:34.451 "name": "BaseBdev2", 00:11:34.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.451 "is_configured": false, 00:11:34.451 "data_offset": 0, 00:11:34.451 "data_size": 0 00:11:34.451 }, 00:11:34.451 { 00:11:34.451 "name": "BaseBdev3", 00:11:34.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.451 "is_configured": false, 00:11:34.451 "data_offset": 0, 00:11:34.451 "data_size": 0 00:11:34.451 }, 00:11:34.451 { 00:11:34.451 "name": "BaseBdev4", 00:11:34.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.451 "is_configured": false, 00:11:34.451 "data_offset": 0, 00:11:34.451 "data_size": 0 00:11:34.451 } 00:11:34.451 ] 00:11:34.451 }' 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.451 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.022 13:28:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.022 [2024-11-18 13:28:04.770066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.022 [2024-11-18 13:28:04.770143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.022 [2024-11-18 13:28:04.778102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.022 [2024-11-18 13:28:04.779970] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.022 [2024-11-18 13:28:04.780014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.022 [2024-11-18 13:28:04.780024] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.022 [2024-11-18 13:28:04.780034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.022 [2024-11-18 13:28:04.780041] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:35.022 [2024-11-18 13:28:04.780049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:35.022 "name": "Existed_Raid", 00:11:35.022 "uuid": "07b9c58d-026b-4351-9d1d-e2006317ba4a", 00:11:35.022 "strip_size_kb": 64, 00:11:35.022 "state": "configuring", 00:11:35.022 "raid_level": "concat", 00:11:35.022 "superblock": true, 00:11:35.022 "num_base_bdevs": 4, 00:11:35.022 "num_base_bdevs_discovered": 1, 00:11:35.022 "num_base_bdevs_operational": 4, 00:11:35.022 "base_bdevs_list": [ 00:11:35.022 { 00:11:35.022 "name": "BaseBdev1", 00:11:35.022 "uuid": "c9b43e31-58bd-4de1-bd69-f90d00cf685f", 00:11:35.022 "is_configured": true, 00:11:35.022 "data_offset": 2048, 00:11:35.022 "data_size": 63488 00:11:35.022 }, 00:11:35.022 { 00:11:35.022 "name": "BaseBdev2", 00:11:35.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.022 "is_configured": false, 00:11:35.022 "data_offset": 0, 00:11:35.022 "data_size": 0 00:11:35.022 }, 00:11:35.022 { 00:11:35.022 "name": "BaseBdev3", 00:11:35.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.022 "is_configured": false, 00:11:35.022 "data_offset": 0, 00:11:35.022 "data_size": 0 00:11:35.022 }, 00:11:35.022 { 00:11:35.022 "name": "BaseBdev4", 00:11:35.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.022 "is_configured": false, 00:11:35.022 "data_offset": 0, 00:11:35.022 "data_size": 0 00:11:35.022 } 00:11:35.022 ] 00:11:35.022 }' 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.022 13:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.283 [2024-11-18 13:28:05.274741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:35.283 BaseBdev2 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.283 [ 00:11:35.283 { 00:11:35.283 "name": "BaseBdev2", 00:11:35.283 "aliases": [ 00:11:35.283 "c6b9707a-5689-4861-ad68-302a6404de8d" 00:11:35.283 ], 00:11:35.283 "product_name": "Malloc disk", 00:11:35.283 "block_size": 512, 00:11:35.283 "num_blocks": 65536, 00:11:35.283 "uuid": "c6b9707a-5689-4861-ad68-302a6404de8d", 
00:11:35.283 "assigned_rate_limits": { 00:11:35.283 "rw_ios_per_sec": 0, 00:11:35.283 "rw_mbytes_per_sec": 0, 00:11:35.283 "r_mbytes_per_sec": 0, 00:11:35.283 "w_mbytes_per_sec": 0 00:11:35.283 }, 00:11:35.283 "claimed": true, 00:11:35.283 "claim_type": "exclusive_write", 00:11:35.283 "zoned": false, 00:11:35.283 "supported_io_types": { 00:11:35.283 "read": true, 00:11:35.283 "write": true, 00:11:35.283 "unmap": true, 00:11:35.283 "flush": true, 00:11:35.283 "reset": true, 00:11:35.283 "nvme_admin": false, 00:11:35.283 "nvme_io": false, 00:11:35.283 "nvme_io_md": false, 00:11:35.283 "write_zeroes": true, 00:11:35.283 "zcopy": true, 00:11:35.283 "get_zone_info": false, 00:11:35.283 "zone_management": false, 00:11:35.283 "zone_append": false, 00:11:35.283 "compare": false, 00:11:35.283 "compare_and_write": false, 00:11:35.283 "abort": true, 00:11:35.283 "seek_hole": false, 00:11:35.283 "seek_data": false, 00:11:35.283 "copy": true, 00:11:35.283 "nvme_iov_md": false 00:11:35.283 }, 00:11:35.283 "memory_domains": [ 00:11:35.283 { 00:11:35.283 "dma_device_id": "system", 00:11:35.283 "dma_device_type": 1 00:11:35.283 }, 00:11:35.283 { 00:11:35.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.283 "dma_device_type": 2 00:11:35.283 } 00:11:35.283 ], 00:11:35.283 "driver_specific": {} 00:11:35.283 } 00:11:35.283 ] 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.283 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.544 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.544 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.544 "name": "Existed_Raid", 00:11:35.544 "uuid": "07b9c58d-026b-4351-9d1d-e2006317ba4a", 00:11:35.544 "strip_size_kb": 64, 00:11:35.544 "state": "configuring", 00:11:35.544 "raid_level": "concat", 00:11:35.544 "superblock": true, 00:11:35.544 "num_base_bdevs": 4, 00:11:35.544 "num_base_bdevs_discovered": 2, 00:11:35.544 
"num_base_bdevs_operational": 4, 00:11:35.544 "base_bdevs_list": [ 00:11:35.544 { 00:11:35.544 "name": "BaseBdev1", 00:11:35.544 "uuid": "c9b43e31-58bd-4de1-bd69-f90d00cf685f", 00:11:35.544 "is_configured": true, 00:11:35.544 "data_offset": 2048, 00:11:35.544 "data_size": 63488 00:11:35.544 }, 00:11:35.544 { 00:11:35.544 "name": "BaseBdev2", 00:11:35.544 "uuid": "c6b9707a-5689-4861-ad68-302a6404de8d", 00:11:35.544 "is_configured": true, 00:11:35.544 "data_offset": 2048, 00:11:35.544 "data_size": 63488 00:11:35.544 }, 00:11:35.544 { 00:11:35.544 "name": "BaseBdev3", 00:11:35.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.544 "is_configured": false, 00:11:35.544 "data_offset": 0, 00:11:35.544 "data_size": 0 00:11:35.544 }, 00:11:35.544 { 00:11:35.544 "name": "BaseBdev4", 00:11:35.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.544 "is_configured": false, 00:11:35.544 "data_offset": 0, 00:11:35.544 "data_size": 0 00:11:35.544 } 00:11:35.544 ] 00:11:35.544 }' 00:11:35.544 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.544 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.804 [2024-11-18 13:28:05.823027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.804 BaseBdev3 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.804 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.804 [ 00:11:35.804 { 00:11:35.804 "name": "BaseBdev3", 00:11:35.804 "aliases": [ 00:11:35.804 "d38ff3e1-79d2-4415-9e5f-2b4ec6a67123" 00:11:35.804 ], 00:11:35.804 "product_name": "Malloc disk", 00:11:35.804 "block_size": 512, 00:11:35.804 "num_blocks": 65536, 00:11:35.804 "uuid": "d38ff3e1-79d2-4415-9e5f-2b4ec6a67123", 00:11:35.804 "assigned_rate_limits": { 00:11:35.804 "rw_ios_per_sec": 0, 00:11:35.804 "rw_mbytes_per_sec": 0, 00:11:35.804 "r_mbytes_per_sec": 0, 00:11:35.804 "w_mbytes_per_sec": 0 00:11:35.804 }, 00:11:35.804 "claimed": true, 00:11:35.804 "claim_type": "exclusive_write", 00:11:35.804 "zoned": false, 00:11:35.804 "supported_io_types": { 
00:11:35.804 "read": true, 00:11:35.804 "write": true, 00:11:35.804 "unmap": true, 00:11:35.804 "flush": true, 00:11:35.804 "reset": true, 00:11:35.804 "nvme_admin": false, 00:11:35.804 "nvme_io": false, 00:11:35.804 "nvme_io_md": false, 00:11:35.804 "write_zeroes": true, 00:11:35.804 "zcopy": true, 00:11:35.804 "get_zone_info": false, 00:11:35.804 "zone_management": false, 00:11:35.804 "zone_append": false, 00:11:35.804 "compare": false, 00:11:35.804 "compare_and_write": false, 00:11:35.804 "abort": true, 00:11:35.804 "seek_hole": false, 00:11:35.804 "seek_data": false, 00:11:35.804 "copy": true, 00:11:36.064 "nvme_iov_md": false 00:11:36.064 }, 00:11:36.064 "memory_domains": [ 00:11:36.064 { 00:11:36.064 "dma_device_id": "system", 00:11:36.064 "dma_device_type": 1 00:11:36.064 }, 00:11:36.064 { 00:11:36.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.064 "dma_device_type": 2 00:11:36.064 } 00:11:36.064 ], 00:11:36.064 "driver_specific": {} 00:11:36.064 } 00:11:36.064 ] 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.064 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.064 "name": "Existed_Raid", 00:11:36.064 "uuid": "07b9c58d-026b-4351-9d1d-e2006317ba4a", 00:11:36.064 "strip_size_kb": 64, 00:11:36.064 "state": "configuring", 00:11:36.064 "raid_level": "concat", 00:11:36.064 "superblock": true, 00:11:36.064 "num_base_bdevs": 4, 00:11:36.064 "num_base_bdevs_discovered": 3, 00:11:36.064 "num_base_bdevs_operational": 4, 00:11:36.064 "base_bdevs_list": [ 00:11:36.064 { 00:11:36.064 "name": "BaseBdev1", 00:11:36.064 "uuid": "c9b43e31-58bd-4de1-bd69-f90d00cf685f", 00:11:36.064 "is_configured": true, 00:11:36.064 "data_offset": 2048, 00:11:36.064 "data_size": 63488 00:11:36.064 }, 00:11:36.064 { 00:11:36.064 "name": "BaseBdev2", 00:11:36.064 
"uuid": "c6b9707a-5689-4861-ad68-302a6404de8d", 00:11:36.064 "is_configured": true, 00:11:36.064 "data_offset": 2048, 00:11:36.065 "data_size": 63488 00:11:36.065 }, 00:11:36.065 { 00:11:36.065 "name": "BaseBdev3", 00:11:36.065 "uuid": "d38ff3e1-79d2-4415-9e5f-2b4ec6a67123", 00:11:36.065 "is_configured": true, 00:11:36.065 "data_offset": 2048, 00:11:36.065 "data_size": 63488 00:11:36.065 }, 00:11:36.065 { 00:11:36.065 "name": "BaseBdev4", 00:11:36.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.065 "is_configured": false, 00:11:36.065 "data_offset": 0, 00:11:36.065 "data_size": 0 00:11:36.065 } 00:11:36.065 ] 00:11:36.065 }' 00:11:36.065 13:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.065 13:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 [2024-11-18 13:28:06.303876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.325 [2024-11-18 13:28:06.304162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.325 [2024-11-18 13:28:06.304178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:36.325 [2024-11-18 13:28:06.304446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.325 BaseBdev4 00:11:36.325 [2024-11-18 13:28:06.304599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.325 [2024-11-18 13:28:06.304612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:36.325 [2024-11-18 13:28:06.304750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.325 [ 00:11:36.325 { 00:11:36.325 "name": "BaseBdev4", 00:11:36.325 "aliases": [ 00:11:36.325 "0b15cbb0-1bb7-4001-89ae-0c53e4e13c14" 00:11:36.325 ], 00:11:36.325 "product_name": "Malloc disk", 00:11:36.325 "block_size": 512, 00:11:36.325 
"num_blocks": 65536, 00:11:36.325 "uuid": "0b15cbb0-1bb7-4001-89ae-0c53e4e13c14", 00:11:36.325 "assigned_rate_limits": { 00:11:36.325 "rw_ios_per_sec": 0, 00:11:36.325 "rw_mbytes_per_sec": 0, 00:11:36.325 "r_mbytes_per_sec": 0, 00:11:36.325 "w_mbytes_per_sec": 0 00:11:36.325 }, 00:11:36.325 "claimed": true, 00:11:36.325 "claim_type": "exclusive_write", 00:11:36.325 "zoned": false, 00:11:36.325 "supported_io_types": { 00:11:36.325 "read": true, 00:11:36.325 "write": true, 00:11:36.325 "unmap": true, 00:11:36.325 "flush": true, 00:11:36.325 "reset": true, 00:11:36.325 "nvme_admin": false, 00:11:36.325 "nvme_io": false, 00:11:36.325 "nvme_io_md": false, 00:11:36.325 "write_zeroes": true, 00:11:36.325 "zcopy": true, 00:11:36.325 "get_zone_info": false, 00:11:36.325 "zone_management": false, 00:11:36.325 "zone_append": false, 00:11:36.325 "compare": false, 00:11:36.325 "compare_and_write": false, 00:11:36.325 "abort": true, 00:11:36.325 "seek_hole": false, 00:11:36.325 "seek_data": false, 00:11:36.325 "copy": true, 00:11:36.325 "nvme_iov_md": false 00:11:36.325 }, 00:11:36.325 "memory_domains": [ 00:11:36.325 { 00:11:36.325 "dma_device_id": "system", 00:11:36.325 "dma_device_type": 1 00:11:36.325 }, 00:11:36.325 { 00:11:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.325 "dma_device_type": 2 00:11:36.325 } 00:11:36.325 ], 00:11:36.325 "driver_specific": {} 00:11:36.325 } 00:11:36.325 ] 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.325 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.326 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.586 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.586 "name": "Existed_Raid", 00:11:36.586 "uuid": "07b9c58d-026b-4351-9d1d-e2006317ba4a", 00:11:36.586 "strip_size_kb": 64, 00:11:36.586 "state": "online", 00:11:36.586 "raid_level": "concat", 00:11:36.586 "superblock": true, 00:11:36.586 "num_base_bdevs": 4, 
00:11:36.586 "num_base_bdevs_discovered": 4, 00:11:36.586 "num_base_bdevs_operational": 4, 00:11:36.586 "base_bdevs_list": [ 00:11:36.586 { 00:11:36.586 "name": "BaseBdev1", 00:11:36.586 "uuid": "c9b43e31-58bd-4de1-bd69-f90d00cf685f", 00:11:36.586 "is_configured": true, 00:11:36.586 "data_offset": 2048, 00:11:36.586 "data_size": 63488 00:11:36.586 }, 00:11:36.586 { 00:11:36.586 "name": "BaseBdev2", 00:11:36.586 "uuid": "c6b9707a-5689-4861-ad68-302a6404de8d", 00:11:36.586 "is_configured": true, 00:11:36.586 "data_offset": 2048, 00:11:36.586 "data_size": 63488 00:11:36.586 }, 00:11:36.586 { 00:11:36.586 "name": "BaseBdev3", 00:11:36.586 "uuid": "d38ff3e1-79d2-4415-9e5f-2b4ec6a67123", 00:11:36.586 "is_configured": true, 00:11:36.586 "data_offset": 2048, 00:11:36.586 "data_size": 63488 00:11:36.586 }, 00:11:36.586 { 00:11:36.586 "name": "BaseBdev4", 00:11:36.586 "uuid": "0b15cbb0-1bb7-4001-89ae-0c53e4e13c14", 00:11:36.586 "is_configured": true, 00:11:36.586 "data_offset": 2048, 00:11:36.586 "data_size": 63488 00:11:36.586 } 00:11:36.586 ] 00:11:36.586 }' 00:11:36.586 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.586 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.847 
13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.847 [2024-11-18 13:28:06.767553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.847 "name": "Existed_Raid", 00:11:36.847 "aliases": [ 00:11:36.847 "07b9c58d-026b-4351-9d1d-e2006317ba4a" 00:11:36.847 ], 00:11:36.847 "product_name": "Raid Volume", 00:11:36.847 "block_size": 512, 00:11:36.847 "num_blocks": 253952, 00:11:36.847 "uuid": "07b9c58d-026b-4351-9d1d-e2006317ba4a", 00:11:36.847 "assigned_rate_limits": { 00:11:36.847 "rw_ios_per_sec": 0, 00:11:36.847 "rw_mbytes_per_sec": 0, 00:11:36.847 "r_mbytes_per_sec": 0, 00:11:36.847 "w_mbytes_per_sec": 0 00:11:36.847 }, 00:11:36.847 "claimed": false, 00:11:36.847 "zoned": false, 00:11:36.847 "supported_io_types": { 00:11:36.847 "read": true, 00:11:36.847 "write": true, 00:11:36.847 "unmap": true, 00:11:36.847 "flush": true, 00:11:36.847 "reset": true, 00:11:36.847 "nvme_admin": false, 00:11:36.847 "nvme_io": false, 00:11:36.847 "nvme_io_md": false, 00:11:36.847 "write_zeroes": true, 00:11:36.847 "zcopy": false, 00:11:36.847 "get_zone_info": false, 00:11:36.847 "zone_management": false, 00:11:36.847 "zone_append": false, 00:11:36.847 "compare": false, 00:11:36.847 "compare_and_write": false, 00:11:36.847 "abort": false, 00:11:36.847 "seek_hole": false, 00:11:36.847 "seek_data": false, 00:11:36.847 "copy": false, 00:11:36.847 
"nvme_iov_md": false 00:11:36.847 }, 00:11:36.847 "memory_domains": [ 00:11:36.847 { 00:11:36.847 "dma_device_id": "system", 00:11:36.847 "dma_device_type": 1 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.847 "dma_device_type": 2 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "dma_device_id": "system", 00:11:36.847 "dma_device_type": 1 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.847 "dma_device_type": 2 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "dma_device_id": "system", 00:11:36.847 "dma_device_type": 1 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.847 "dma_device_type": 2 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "dma_device_id": "system", 00:11:36.847 "dma_device_type": 1 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.847 "dma_device_type": 2 00:11:36.847 } 00:11:36.847 ], 00:11:36.847 "driver_specific": { 00:11:36.847 "raid": { 00:11:36.847 "uuid": "07b9c58d-026b-4351-9d1d-e2006317ba4a", 00:11:36.847 "strip_size_kb": 64, 00:11:36.847 "state": "online", 00:11:36.847 "raid_level": "concat", 00:11:36.847 "superblock": true, 00:11:36.847 "num_base_bdevs": 4, 00:11:36.847 "num_base_bdevs_discovered": 4, 00:11:36.847 "num_base_bdevs_operational": 4, 00:11:36.847 "base_bdevs_list": [ 00:11:36.847 { 00:11:36.847 "name": "BaseBdev1", 00:11:36.847 "uuid": "c9b43e31-58bd-4de1-bd69-f90d00cf685f", 00:11:36.847 "is_configured": true, 00:11:36.847 "data_offset": 2048, 00:11:36.847 "data_size": 63488 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "name": "BaseBdev2", 00:11:36.847 "uuid": "c6b9707a-5689-4861-ad68-302a6404de8d", 00:11:36.847 "is_configured": true, 00:11:36.847 "data_offset": 2048, 00:11:36.847 "data_size": 63488 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "name": "BaseBdev3", 00:11:36.847 "uuid": "d38ff3e1-79d2-4415-9e5f-2b4ec6a67123", 00:11:36.847 "is_configured": true, 
00:11:36.847 "data_offset": 2048, 00:11:36.847 "data_size": 63488 00:11:36.847 }, 00:11:36.847 { 00:11:36.847 "name": "BaseBdev4", 00:11:36.847 "uuid": "0b15cbb0-1bb7-4001-89ae-0c53e4e13c14", 00:11:36.847 "is_configured": true, 00:11:36.847 "data_offset": 2048, 00:11:36.847 "data_size": 63488 00:11:36.847 } 00:11:36.847 ] 00:11:36.847 } 00:11:36.847 } 00:11:36.847 }' 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.847 BaseBdev2 00:11:36.847 BaseBdev3 00:11:36.847 BaseBdev4' 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.847 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.108 13:28:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.108 13:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.108 [2024-11-18 13:28:07.026799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.108 [2024-11-18 13:28:07.026837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.108 [2024-11-18 13:28:07.026891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.108 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:37.369 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.369 "name": "Existed_Raid", 00:11:37.369 "uuid": "07b9c58d-026b-4351-9d1d-e2006317ba4a", 00:11:37.369 "strip_size_kb": 64, 00:11:37.369 "state": "offline", 00:11:37.369 "raid_level": "concat", 00:11:37.369 "superblock": true, 00:11:37.369 "num_base_bdevs": 4, 00:11:37.369 "num_base_bdevs_discovered": 3, 00:11:37.369 "num_base_bdevs_operational": 3, 00:11:37.369 "base_bdevs_list": [ 00:11:37.369 { 00:11:37.369 "name": null, 00:11:37.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.369 "is_configured": false, 00:11:37.369 "data_offset": 0, 00:11:37.369 "data_size": 63488 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "name": "BaseBdev2", 00:11:37.369 "uuid": "c6b9707a-5689-4861-ad68-302a6404de8d", 00:11:37.369 "is_configured": true, 00:11:37.369 "data_offset": 2048, 00:11:37.369 "data_size": 63488 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "name": "BaseBdev3", 00:11:37.369 "uuid": "d38ff3e1-79d2-4415-9e5f-2b4ec6a67123", 00:11:37.369 "is_configured": true, 00:11:37.369 "data_offset": 2048, 00:11:37.369 "data_size": 63488 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "name": "BaseBdev4", 00:11:37.369 "uuid": "0b15cbb0-1bb7-4001-89ae-0c53e4e13c14", 00:11:37.369 "is_configured": true, 00:11:37.369 "data_offset": 2048, 00:11:37.369 "data_size": 63488 00:11:37.369 } 00:11:37.369 ] 00:11:37.369 }' 00:11:37.369 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.369 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.629 
13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.629 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.629 [2024-11-18 13:28:07.650853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.896 [2024-11-18 13:28:07.802924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.896 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:38.197 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:38.197 13:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:38.197 13:28:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 13:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 [2024-11-18 13:28:07.956422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:38.197 [2024-11-18 13:28:07.956480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 BaseBdev2 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 [ 00:11:38.197 { 00:11:38.197 "name": "BaseBdev2", 00:11:38.197 "aliases": [ 00:11:38.197 
"8cc5b822-03da-4970-85e1-8ab20b28381c" 00:11:38.197 ], 00:11:38.197 "product_name": "Malloc disk", 00:11:38.197 "block_size": 512, 00:11:38.197 "num_blocks": 65536, 00:11:38.197 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:38.197 "assigned_rate_limits": { 00:11:38.197 "rw_ios_per_sec": 0, 00:11:38.197 "rw_mbytes_per_sec": 0, 00:11:38.197 "r_mbytes_per_sec": 0, 00:11:38.197 "w_mbytes_per_sec": 0 00:11:38.197 }, 00:11:38.197 "claimed": false, 00:11:38.197 "zoned": false, 00:11:38.197 "supported_io_types": { 00:11:38.197 "read": true, 00:11:38.197 "write": true, 00:11:38.197 "unmap": true, 00:11:38.197 "flush": true, 00:11:38.197 "reset": true, 00:11:38.197 "nvme_admin": false, 00:11:38.197 "nvme_io": false, 00:11:38.197 "nvme_io_md": false, 00:11:38.197 "write_zeroes": true, 00:11:38.197 "zcopy": true, 00:11:38.197 "get_zone_info": false, 00:11:38.197 "zone_management": false, 00:11:38.197 "zone_append": false, 00:11:38.197 "compare": false, 00:11:38.197 "compare_and_write": false, 00:11:38.197 "abort": true, 00:11:38.197 "seek_hole": false, 00:11:38.197 "seek_data": false, 00:11:38.197 "copy": true, 00:11:38.197 "nvme_iov_md": false 00:11:38.197 }, 00:11:38.197 "memory_domains": [ 00:11:38.197 { 00:11:38.197 "dma_device_id": "system", 00:11:38.197 "dma_device_type": 1 00:11:38.197 }, 00:11:38.197 { 00:11:38.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.197 "dma_device_type": 2 00:11:38.197 } 00:11:38.197 ], 00:11:38.197 "driver_specific": {} 00:11:38.197 } 00:11:38.197 ] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.197 13:28:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 BaseBdev3 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.197 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.457 [ 00:11:38.457 { 
00:11:38.457 "name": "BaseBdev3", 00:11:38.457 "aliases": [ 00:11:38.457 "0966d0f6-e6bb-4369-ba98-515a990eac6e" 00:11:38.457 ], 00:11:38.457 "product_name": "Malloc disk", 00:11:38.457 "block_size": 512, 00:11:38.457 "num_blocks": 65536, 00:11:38.457 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:38.457 "assigned_rate_limits": { 00:11:38.457 "rw_ios_per_sec": 0, 00:11:38.457 "rw_mbytes_per_sec": 0, 00:11:38.457 "r_mbytes_per_sec": 0, 00:11:38.457 "w_mbytes_per_sec": 0 00:11:38.457 }, 00:11:38.457 "claimed": false, 00:11:38.457 "zoned": false, 00:11:38.457 "supported_io_types": { 00:11:38.457 "read": true, 00:11:38.457 "write": true, 00:11:38.457 "unmap": true, 00:11:38.457 "flush": true, 00:11:38.457 "reset": true, 00:11:38.457 "nvme_admin": false, 00:11:38.457 "nvme_io": false, 00:11:38.457 "nvme_io_md": false, 00:11:38.457 "write_zeroes": true, 00:11:38.457 "zcopy": true, 00:11:38.457 "get_zone_info": false, 00:11:38.457 "zone_management": false, 00:11:38.457 "zone_append": false, 00:11:38.457 "compare": false, 00:11:38.457 "compare_and_write": false, 00:11:38.457 "abort": true, 00:11:38.457 "seek_hole": false, 00:11:38.457 "seek_data": false, 00:11:38.457 "copy": true, 00:11:38.457 "nvme_iov_md": false 00:11:38.457 }, 00:11:38.457 "memory_domains": [ 00:11:38.457 { 00:11:38.457 "dma_device_id": "system", 00:11:38.457 "dma_device_type": 1 00:11:38.457 }, 00:11:38.457 { 00:11:38.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.457 "dma_device_type": 2 00:11:38.457 } 00:11:38.457 ], 00:11:38.457 "driver_specific": {} 00:11:38.457 } 00:11:38.457 ] 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.457 BaseBdev4 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.457 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:38.458 [ 00:11:38.458 { 00:11:38.458 "name": "BaseBdev4", 00:11:38.458 "aliases": [ 00:11:38.458 "b25b8fa1-f000-4d07-ac0f-dae550181133" 00:11:38.458 ], 00:11:38.458 "product_name": "Malloc disk", 00:11:38.458 "block_size": 512, 00:11:38.458 "num_blocks": 65536, 00:11:38.458 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:38.458 "assigned_rate_limits": { 00:11:38.458 "rw_ios_per_sec": 0, 00:11:38.458 "rw_mbytes_per_sec": 0, 00:11:38.458 "r_mbytes_per_sec": 0, 00:11:38.458 "w_mbytes_per_sec": 0 00:11:38.458 }, 00:11:38.458 "claimed": false, 00:11:38.458 "zoned": false, 00:11:38.458 "supported_io_types": { 00:11:38.458 "read": true, 00:11:38.458 "write": true, 00:11:38.458 "unmap": true, 00:11:38.458 "flush": true, 00:11:38.458 "reset": true, 00:11:38.458 "nvme_admin": false, 00:11:38.458 "nvme_io": false, 00:11:38.458 "nvme_io_md": false, 00:11:38.458 "write_zeroes": true, 00:11:38.458 "zcopy": true, 00:11:38.458 "get_zone_info": false, 00:11:38.458 "zone_management": false, 00:11:38.458 "zone_append": false, 00:11:38.458 "compare": false, 00:11:38.458 "compare_and_write": false, 00:11:38.458 "abort": true, 00:11:38.458 "seek_hole": false, 00:11:38.458 "seek_data": false, 00:11:38.458 "copy": true, 00:11:38.458 "nvme_iov_md": false 00:11:38.458 }, 00:11:38.458 "memory_domains": [ 00:11:38.458 { 00:11:38.458 "dma_device_id": "system", 00:11:38.458 "dma_device_type": 1 00:11:38.458 }, 00:11:38.458 { 00:11:38.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.458 "dma_device_type": 2 00:11:38.458 } 00:11:38.458 ], 00:11:38.458 "driver_specific": {} 00:11:38.458 } 00:11:38.458 ] 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.458 13:28:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 [2024-11-18 13:28:08.354357] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.458 [2024-11-18 13:28:08.354406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.458 [2024-11-18 13:28:08.354426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.458 [2024-11-18 13:28:08.356189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.458 [2024-11-18 13:28:08.356238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.458 "name": "Existed_Raid", 00:11:38.458 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:38.458 "strip_size_kb": 64, 00:11:38.458 "state": "configuring", 00:11:38.458 "raid_level": "concat", 00:11:38.458 "superblock": true, 00:11:38.458 "num_base_bdevs": 4, 00:11:38.458 "num_base_bdevs_discovered": 3, 00:11:38.458 "num_base_bdevs_operational": 4, 00:11:38.458 "base_bdevs_list": [ 00:11:38.458 { 00:11:38.458 "name": "BaseBdev1", 00:11:38.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.458 "is_configured": false, 00:11:38.458 "data_offset": 0, 00:11:38.458 "data_size": 0 00:11:38.458 }, 00:11:38.458 { 00:11:38.458 "name": "BaseBdev2", 00:11:38.458 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:38.458 "is_configured": true, 00:11:38.458 "data_offset": 2048, 00:11:38.458 "data_size": 63488 
00:11:38.458 }, 00:11:38.458 { 00:11:38.458 "name": "BaseBdev3", 00:11:38.458 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:38.458 "is_configured": true, 00:11:38.458 "data_offset": 2048, 00:11:38.458 "data_size": 63488 00:11:38.458 }, 00:11:38.458 { 00:11:38.458 "name": "BaseBdev4", 00:11:38.458 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:38.458 "is_configured": true, 00:11:38.458 "data_offset": 2048, 00:11:38.458 "data_size": 63488 00:11:38.458 } 00:11:38.458 ] 00:11:38.458 }' 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.458 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.028 [2024-11-18 13:28:08.809595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.028 "name": "Existed_Raid", 00:11:39.028 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:39.028 "strip_size_kb": 64, 00:11:39.028 "state": "configuring", 00:11:39.028 "raid_level": "concat", 00:11:39.028 "superblock": true, 00:11:39.028 "num_base_bdevs": 4, 00:11:39.028 "num_base_bdevs_discovered": 2, 00:11:39.028 "num_base_bdevs_operational": 4, 00:11:39.028 "base_bdevs_list": [ 00:11:39.028 { 00:11:39.028 "name": "BaseBdev1", 00:11:39.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.028 "is_configured": false, 00:11:39.028 "data_offset": 0, 00:11:39.028 "data_size": 0 00:11:39.028 }, 00:11:39.028 { 00:11:39.028 "name": null, 00:11:39.028 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:39.028 "is_configured": false, 00:11:39.028 "data_offset": 0, 00:11:39.028 "data_size": 63488 
00:11:39.028 }, 00:11:39.028 { 00:11:39.028 "name": "BaseBdev3", 00:11:39.028 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:39.028 "is_configured": true, 00:11:39.028 "data_offset": 2048, 00:11:39.028 "data_size": 63488 00:11:39.028 }, 00:11:39.028 { 00:11:39.028 "name": "BaseBdev4", 00:11:39.028 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:39.028 "is_configured": true, 00:11:39.028 "data_offset": 2048, 00:11:39.028 "data_size": 63488 00:11:39.028 } 00:11:39.028 ] 00:11:39.028 }' 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.028 13:28:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 [2024-11-18 13:28:09.290201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.287 BaseBdev1 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.287 [ 00:11:39.287 { 00:11:39.287 "name": "BaseBdev1", 00:11:39.287 "aliases": [ 00:11:39.287 "32f6c705-792e-44d0-b11c-7b838bea43d5" 00:11:39.287 ], 00:11:39.287 "product_name": "Malloc disk", 00:11:39.287 "block_size": 512, 00:11:39.287 "num_blocks": 65536, 00:11:39.287 "uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:39.287 "assigned_rate_limits": { 00:11:39.287 "rw_ios_per_sec": 0, 00:11:39.287 "rw_mbytes_per_sec": 0, 
00:11:39.287 "r_mbytes_per_sec": 0, 00:11:39.287 "w_mbytes_per_sec": 0 00:11:39.287 }, 00:11:39.287 "claimed": true, 00:11:39.287 "claim_type": "exclusive_write", 00:11:39.287 "zoned": false, 00:11:39.287 "supported_io_types": { 00:11:39.287 "read": true, 00:11:39.287 "write": true, 00:11:39.287 "unmap": true, 00:11:39.287 "flush": true, 00:11:39.287 "reset": true, 00:11:39.287 "nvme_admin": false, 00:11:39.287 "nvme_io": false, 00:11:39.287 "nvme_io_md": false, 00:11:39.287 "write_zeroes": true, 00:11:39.287 "zcopy": true, 00:11:39.287 "get_zone_info": false, 00:11:39.287 "zone_management": false, 00:11:39.287 "zone_append": false, 00:11:39.287 "compare": false, 00:11:39.287 "compare_and_write": false, 00:11:39.287 "abort": true, 00:11:39.287 "seek_hole": false, 00:11:39.287 "seek_data": false, 00:11:39.287 "copy": true, 00:11:39.287 "nvme_iov_md": false 00:11:39.287 }, 00:11:39.287 "memory_domains": [ 00:11:39.287 { 00:11:39.287 "dma_device_id": "system", 00:11:39.287 "dma_device_type": 1 00:11:39.287 }, 00:11:39.287 { 00:11:39.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.287 "dma_device_type": 2 00:11:39.287 } 00:11:39.287 ], 00:11:39.287 "driver_specific": {} 00:11:39.287 } 00:11:39.287 ] 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.287 13:28:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.287 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.547 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.547 "name": "Existed_Raid", 00:11:39.547 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:39.547 "strip_size_kb": 64, 00:11:39.547 "state": "configuring", 00:11:39.547 "raid_level": "concat", 00:11:39.547 "superblock": true, 00:11:39.547 "num_base_bdevs": 4, 00:11:39.547 "num_base_bdevs_discovered": 3, 00:11:39.547 "num_base_bdevs_operational": 4, 00:11:39.547 "base_bdevs_list": [ 00:11:39.547 { 00:11:39.547 "name": "BaseBdev1", 00:11:39.547 "uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:39.547 "is_configured": true, 00:11:39.547 "data_offset": 2048, 00:11:39.547 "data_size": 63488 00:11:39.547 }, 00:11:39.547 { 
00:11:39.547 "name": null, 00:11:39.547 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:39.547 "is_configured": false, 00:11:39.547 "data_offset": 0, 00:11:39.547 "data_size": 63488 00:11:39.547 }, 00:11:39.547 { 00:11:39.547 "name": "BaseBdev3", 00:11:39.547 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:39.547 "is_configured": true, 00:11:39.547 "data_offset": 2048, 00:11:39.547 "data_size": 63488 00:11:39.547 }, 00:11:39.547 { 00:11:39.547 "name": "BaseBdev4", 00:11:39.547 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:39.547 "is_configured": true, 00:11:39.547 "data_offset": 2048, 00:11:39.547 "data_size": 63488 00:11:39.547 } 00:11:39.547 ] 00:11:39.547 }' 00:11:39.547 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.547 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.807 [2024-11-18 13:28:09.833325] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.807 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.067 13:28:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.067 "name": "Existed_Raid", 00:11:40.067 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:40.067 "strip_size_kb": 64, 00:11:40.067 "state": "configuring", 00:11:40.067 "raid_level": "concat", 00:11:40.067 "superblock": true, 00:11:40.067 "num_base_bdevs": 4, 00:11:40.067 "num_base_bdevs_discovered": 2, 00:11:40.067 "num_base_bdevs_operational": 4, 00:11:40.067 "base_bdevs_list": [ 00:11:40.067 { 00:11:40.067 "name": "BaseBdev1", 00:11:40.067 "uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:40.067 "is_configured": true, 00:11:40.067 "data_offset": 2048, 00:11:40.067 "data_size": 63488 00:11:40.067 }, 00:11:40.067 { 00:11:40.067 "name": null, 00:11:40.067 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:40.067 "is_configured": false, 00:11:40.067 "data_offset": 0, 00:11:40.067 "data_size": 63488 00:11:40.067 }, 00:11:40.067 { 00:11:40.067 "name": null, 00:11:40.067 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:40.067 "is_configured": false, 00:11:40.067 "data_offset": 0, 00:11:40.067 "data_size": 63488 00:11:40.067 }, 00:11:40.067 { 00:11:40.067 "name": "BaseBdev4", 00:11:40.067 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:40.067 "is_configured": true, 00:11:40.067 "data_offset": 2048, 00:11:40.067 "data_size": 63488 00:11:40.068 } 00:11:40.068 ] 00:11:40.068 }' 00:11:40.068 13:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.068 13:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.328 13:28:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.328 [2024-11-18 13:28:10.372411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.328 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.588 "name": "Existed_Raid", 00:11:40.588 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:40.588 "strip_size_kb": 64, 00:11:40.588 "state": "configuring", 00:11:40.588 "raid_level": "concat", 00:11:40.588 "superblock": true, 00:11:40.588 "num_base_bdevs": 4, 00:11:40.588 "num_base_bdevs_discovered": 3, 00:11:40.588 "num_base_bdevs_operational": 4, 00:11:40.588 "base_bdevs_list": [ 00:11:40.588 { 00:11:40.588 "name": "BaseBdev1", 00:11:40.588 "uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:40.588 "is_configured": true, 00:11:40.588 "data_offset": 2048, 00:11:40.588 "data_size": 63488 00:11:40.588 }, 00:11:40.588 { 00:11:40.588 "name": null, 00:11:40.588 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:40.588 "is_configured": false, 00:11:40.588 "data_offset": 0, 00:11:40.588 "data_size": 63488 00:11:40.588 }, 00:11:40.588 { 00:11:40.588 "name": "BaseBdev3", 00:11:40.588 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:40.588 "is_configured": true, 00:11:40.588 "data_offset": 2048, 00:11:40.588 "data_size": 63488 00:11:40.588 }, 00:11:40.588 { 00:11:40.588 "name": "BaseBdev4", 00:11:40.588 "uuid": 
"b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:40.588 "is_configured": true, 00:11:40.588 "data_offset": 2048, 00:11:40.588 "data_size": 63488 00:11:40.588 } 00:11:40.588 ] 00:11:40.588 }' 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.588 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.848 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.848 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.848 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.848 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.848 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.848 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.849 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.849 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.849 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.849 [2024-11-18 13:28:10.883554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.108 13:28:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.109 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.109 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.109 "name": "Existed_Raid", 00:11:41.109 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:41.109 "strip_size_kb": 64, 00:11:41.109 "state": "configuring", 00:11:41.109 "raid_level": "concat", 00:11:41.109 "superblock": true, 00:11:41.109 "num_base_bdevs": 4, 00:11:41.109 "num_base_bdevs_discovered": 2, 00:11:41.109 "num_base_bdevs_operational": 4, 00:11:41.109 "base_bdevs_list": [ 00:11:41.109 { 00:11:41.109 "name": null, 00:11:41.109 
"uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:41.109 "is_configured": false, 00:11:41.109 "data_offset": 0, 00:11:41.109 "data_size": 63488 00:11:41.109 }, 00:11:41.109 { 00:11:41.109 "name": null, 00:11:41.109 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:41.109 "is_configured": false, 00:11:41.109 "data_offset": 0, 00:11:41.109 "data_size": 63488 00:11:41.109 }, 00:11:41.109 { 00:11:41.109 "name": "BaseBdev3", 00:11:41.109 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:41.109 "is_configured": true, 00:11:41.109 "data_offset": 2048, 00:11:41.109 "data_size": 63488 00:11:41.109 }, 00:11:41.109 { 00:11:41.109 "name": "BaseBdev4", 00:11:41.109 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:41.109 "is_configured": true, 00:11:41.109 "data_offset": 2048, 00:11:41.109 "data_size": 63488 00:11:41.109 } 00:11:41.109 ] 00:11:41.109 }' 00:11:41.109 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.109 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.394 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.394 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.394 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.394 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.654 [2024-11-18 13:28:11.487308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.654 "name": "Existed_Raid", 00:11:41.654 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:41.654 "strip_size_kb": 64, 00:11:41.654 "state": "configuring", 00:11:41.654 "raid_level": "concat", 00:11:41.654 "superblock": true, 00:11:41.654 "num_base_bdevs": 4, 00:11:41.654 "num_base_bdevs_discovered": 3, 00:11:41.654 "num_base_bdevs_operational": 4, 00:11:41.654 "base_bdevs_list": [ 00:11:41.654 { 00:11:41.654 "name": null, 00:11:41.654 "uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:41.654 "is_configured": false, 00:11:41.654 "data_offset": 0, 00:11:41.654 "data_size": 63488 00:11:41.654 }, 00:11:41.654 { 00:11:41.654 "name": "BaseBdev2", 00:11:41.654 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:41.654 "is_configured": true, 00:11:41.654 "data_offset": 2048, 00:11:41.654 "data_size": 63488 00:11:41.654 }, 00:11:41.654 { 00:11:41.654 "name": "BaseBdev3", 00:11:41.654 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:41.654 "is_configured": true, 00:11:41.654 "data_offset": 2048, 00:11:41.654 "data_size": 63488 00:11:41.654 }, 00:11:41.654 { 00:11:41.654 "name": "BaseBdev4", 00:11:41.654 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:41.654 "is_configured": true, 00:11:41.654 "data_offset": 2048, 00:11:41.654 "data_size": 63488 00:11:41.654 } 00:11:41.654 ] 00:11:41.654 }' 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.654 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.914 13:28:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.914 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.174 13:28:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 32f6c705-792e-44d0-b11c-7b838bea43d5 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.174 [2024-11-18 13:28:12.048011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:42.174 [2024-11-18 13:28:12.048366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:42.174 [2024-11-18 13:28:12.048413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:42.174 [2024-11-18 13:28:12.048680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:42.174 [2024-11-18 13:28:12.048850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.174 [2024-11-18 13:28:12.048894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:42.174 NewBaseBdev 00:11:42.174 [2024-11-18 13:28:12.049069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.174 13:28:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.174 [ 00:11:42.174 { 00:11:42.174 "name": "NewBaseBdev", 00:11:42.174 "aliases": [ 00:11:42.174 "32f6c705-792e-44d0-b11c-7b838bea43d5" 00:11:42.174 ], 00:11:42.174 "product_name": "Malloc disk", 00:11:42.174 "block_size": 512, 00:11:42.174 "num_blocks": 65536, 00:11:42.174 "uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:42.174 "assigned_rate_limits": { 00:11:42.174 "rw_ios_per_sec": 0, 00:11:42.174 "rw_mbytes_per_sec": 0, 00:11:42.174 "r_mbytes_per_sec": 0, 00:11:42.174 "w_mbytes_per_sec": 0 00:11:42.174 }, 00:11:42.174 "claimed": true, 00:11:42.174 "claim_type": "exclusive_write", 00:11:42.174 "zoned": false, 00:11:42.174 "supported_io_types": { 00:11:42.174 "read": true, 00:11:42.174 "write": true, 00:11:42.174 "unmap": true, 00:11:42.174 "flush": true, 00:11:42.174 "reset": true, 00:11:42.174 "nvme_admin": false, 00:11:42.174 "nvme_io": false, 00:11:42.174 "nvme_io_md": false, 00:11:42.174 "write_zeroes": true, 00:11:42.174 "zcopy": true, 00:11:42.174 "get_zone_info": false, 00:11:42.174 "zone_management": false, 00:11:42.174 "zone_append": false, 00:11:42.174 "compare": false, 00:11:42.174 "compare_and_write": false, 00:11:42.174 "abort": true, 00:11:42.174 "seek_hole": false, 00:11:42.174 "seek_data": false, 00:11:42.174 "copy": true, 00:11:42.174 "nvme_iov_md": false 00:11:42.174 }, 00:11:42.174 "memory_domains": [ 00:11:42.174 { 00:11:42.174 "dma_device_id": "system", 00:11:42.174 "dma_device_type": 1 00:11:42.174 }, 00:11:42.174 { 00:11:42.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.174 "dma_device_type": 2 00:11:42.174 } 00:11:42.174 ], 00:11:42.174 "driver_specific": {} 00:11:42.174 } 00:11:42.174 ] 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:42.174 13:28:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.174 "name": "Existed_Raid", 00:11:42.174 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:42.174 "strip_size_kb": 64, 00:11:42.174 
"state": "online", 00:11:42.174 "raid_level": "concat", 00:11:42.174 "superblock": true, 00:11:42.174 "num_base_bdevs": 4, 00:11:42.174 "num_base_bdevs_discovered": 4, 00:11:42.174 "num_base_bdevs_operational": 4, 00:11:42.174 "base_bdevs_list": [ 00:11:42.174 { 00:11:42.174 "name": "NewBaseBdev", 00:11:42.174 "uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:42.174 "is_configured": true, 00:11:42.174 "data_offset": 2048, 00:11:42.174 "data_size": 63488 00:11:42.174 }, 00:11:42.174 { 00:11:42.174 "name": "BaseBdev2", 00:11:42.174 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:42.174 "is_configured": true, 00:11:42.174 "data_offset": 2048, 00:11:42.174 "data_size": 63488 00:11:42.174 }, 00:11:42.174 { 00:11:42.174 "name": "BaseBdev3", 00:11:42.174 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:42.174 "is_configured": true, 00:11:42.174 "data_offset": 2048, 00:11:42.174 "data_size": 63488 00:11:42.174 }, 00:11:42.174 { 00:11:42.174 "name": "BaseBdev4", 00:11:42.174 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:42.174 "is_configured": true, 00:11:42.174 "data_offset": 2048, 00:11:42.174 "data_size": 63488 00:11:42.174 } 00:11:42.174 ] 00:11:42.174 }' 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.174 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.746 
13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.746 [2024-11-18 13:28:12.555661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.746 "name": "Existed_Raid", 00:11:42.746 "aliases": [ 00:11:42.746 "3ebb4214-0fd4-4947-9ad2-94652aa46111" 00:11:42.746 ], 00:11:42.746 "product_name": "Raid Volume", 00:11:42.746 "block_size": 512, 00:11:42.746 "num_blocks": 253952, 00:11:42.746 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:42.746 "assigned_rate_limits": { 00:11:42.746 "rw_ios_per_sec": 0, 00:11:42.746 "rw_mbytes_per_sec": 0, 00:11:42.746 "r_mbytes_per_sec": 0, 00:11:42.746 "w_mbytes_per_sec": 0 00:11:42.746 }, 00:11:42.746 "claimed": false, 00:11:42.746 "zoned": false, 00:11:42.746 "supported_io_types": { 00:11:42.746 "read": true, 00:11:42.746 "write": true, 00:11:42.746 "unmap": true, 00:11:42.746 "flush": true, 00:11:42.746 "reset": true, 00:11:42.746 "nvme_admin": false, 00:11:42.746 "nvme_io": false, 00:11:42.746 "nvme_io_md": false, 00:11:42.746 "write_zeroes": true, 00:11:42.746 "zcopy": false, 00:11:42.746 "get_zone_info": false, 00:11:42.746 "zone_management": false, 00:11:42.746 "zone_append": false, 00:11:42.746 "compare": false, 00:11:42.746 "compare_and_write": false, 00:11:42.746 "abort": 
false, 00:11:42.746 "seek_hole": false, 00:11:42.746 "seek_data": false, 00:11:42.746 "copy": false, 00:11:42.746 "nvme_iov_md": false 00:11:42.746 }, 00:11:42.746 "memory_domains": [ 00:11:42.746 { 00:11:42.746 "dma_device_id": "system", 00:11:42.746 "dma_device_type": 1 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.746 "dma_device_type": 2 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "dma_device_id": "system", 00:11:42.746 "dma_device_type": 1 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.746 "dma_device_type": 2 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "dma_device_id": "system", 00:11:42.746 "dma_device_type": 1 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.746 "dma_device_type": 2 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "dma_device_id": "system", 00:11:42.746 "dma_device_type": 1 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.746 "dma_device_type": 2 00:11:42.746 } 00:11:42.746 ], 00:11:42.746 "driver_specific": { 00:11:42.746 "raid": { 00:11:42.746 "uuid": "3ebb4214-0fd4-4947-9ad2-94652aa46111", 00:11:42.746 "strip_size_kb": 64, 00:11:42.746 "state": "online", 00:11:42.746 "raid_level": "concat", 00:11:42.746 "superblock": true, 00:11:42.746 "num_base_bdevs": 4, 00:11:42.746 "num_base_bdevs_discovered": 4, 00:11:42.746 "num_base_bdevs_operational": 4, 00:11:42.746 "base_bdevs_list": [ 00:11:42.746 { 00:11:42.746 "name": "NewBaseBdev", 00:11:42.746 "uuid": "32f6c705-792e-44d0-b11c-7b838bea43d5", 00:11:42.746 "is_configured": true, 00:11:42.746 "data_offset": 2048, 00:11:42.746 "data_size": 63488 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "name": "BaseBdev2", 00:11:42.746 "uuid": "8cc5b822-03da-4970-85e1-8ab20b28381c", 00:11:42.746 "is_configured": true, 00:11:42.746 "data_offset": 2048, 00:11:42.746 "data_size": 63488 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 
"name": "BaseBdev3", 00:11:42.746 "uuid": "0966d0f6-e6bb-4369-ba98-515a990eac6e", 00:11:42.746 "is_configured": true, 00:11:42.746 "data_offset": 2048, 00:11:42.746 "data_size": 63488 00:11:42.746 }, 00:11:42.746 { 00:11:42.746 "name": "BaseBdev4", 00:11:42.746 "uuid": "b25b8fa1-f000-4d07-ac0f-dae550181133", 00:11:42.746 "is_configured": true, 00:11:42.746 "data_offset": 2048, 00:11:42.746 "data_size": 63488 00:11:42.746 } 00:11:42.746 ] 00:11:42.746 } 00:11:42.746 } 00:11:42.746 }' 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.746 BaseBdev2 00:11:42.746 BaseBdev3 00:11:42.746 BaseBdev4' 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.746 13:28:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.746 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.747 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.747 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.747 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.747 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.747 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.747 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.747 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.007 [2024-11-18 13:28:12.854727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.007 [2024-11-18 13:28:12.854771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.007 [2024-11-18 13:28:12.854870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.007 [2024-11-18 13:28:12.854962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.007 [2024-11-18 13:28:12.854975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71958 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71958 ']' 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71958 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71958 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71958' 00:11:43.007 killing process with pid 71958 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71958 00:11:43.007 [2024-11-18 13:28:12.900360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.007 13:28:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71958 00:11:43.576 [2024-11-18 13:28:13.326493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.516 13:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:44.516 00:11:44.516 real 0m11.718s 00:11:44.516 user 0m18.483s 00:11:44.516 sys 0m2.120s 00:11:44.516 13:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.516 
************************************ 00:11:44.516 END TEST raid_state_function_test_sb 00:11:44.516 ************************************ 00:11:44.516 13:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.782 13:28:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:44.782 13:28:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.782 13:28:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.782 13:28:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.782 ************************************ 00:11:44.782 START TEST raid_superblock_test 00:11:44.782 ************************************ 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72629 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72629 00:11:44.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72629 ']' 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.782 13:28:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.782 [2024-11-18 13:28:14.750950] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:44.782 [2024-11-18 13:28:14.751095] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72629 ] 00:11:45.042 [2024-11-18 13:28:14.916943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.042 [2024-11-18 13:28:15.067910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.302 [2024-11-18 13:28:15.304052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.302 [2024-11-18 13:28:15.304292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:45.565 
13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.565 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.825 malloc1 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.825 [2024-11-18 13:28:15.640421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.825 [2024-11-18 13:28:15.640554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.825 [2024-11-18 13:28:15.640604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:45.825 [2024-11-18 13:28:15.640660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.825 [2024-11-18 13:28:15.643119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.825 [2024-11-18 13:28:15.643218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.825 pt1 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.825 malloc2 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.825 [2024-11-18 13:28:15.706846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.825 [2024-11-18 13:28:15.707018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.825 [2024-11-18 13:28:15.707072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:45.825 [2024-11-18 13:28:15.707092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.825 [2024-11-18 13:28:15.710193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.825 [2024-11-18 13:28:15.710242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.825 
pt2 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.825 malloc3 00:11:45.825 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.826 [2024-11-18 13:28:15.783913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:45.826 [2024-11-18 13:28:15.784036] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.826 [2024-11-18 13:28:15.784087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:45.826 [2024-11-18 13:28:15.784141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.826 [2024-11-18 13:28:15.786799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.826 [2024-11-18 13:28:15.786888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:45.826 pt3 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.826 malloc4 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.826 [2024-11-18 13:28:15.852376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:45.826 [2024-11-18 13:28:15.852548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.826 [2024-11-18 13:28:15.852631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:45.826 [2024-11-18 13:28:15.852701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.826 [2024-11-18 13:28:15.855992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.826 [2024-11-18 13:28:15.856123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:45.826 pt4 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.826 [2024-11-18 13:28:15.864423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.826 [2024-11-18 
13:28:15.866815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.826 [2024-11-18 13:28:15.866974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.826 [2024-11-18 13:28:15.867085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:45.826 [2024-11-18 13:28:15.867374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:45.826 [2024-11-18 13:28:15.867438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:45.826 [2024-11-18 13:28:15.867839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:45.826 [2024-11-18 13:28:15.868118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:45.826 [2024-11-18 13:28:15.868194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:45.826 [2024-11-18 13:28:15.868511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.826 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.085 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.085 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.085 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.085 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.085 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.085 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.085 "name": "raid_bdev1", 00:11:46.085 "uuid": "c35ab4d2-0851-4b2b-95cd-706d2ddaf767", 00:11:46.085 "strip_size_kb": 64, 00:11:46.085 "state": "online", 00:11:46.085 "raid_level": "concat", 00:11:46.085 "superblock": true, 00:11:46.085 "num_base_bdevs": 4, 00:11:46.085 "num_base_bdevs_discovered": 4, 00:11:46.085 "num_base_bdevs_operational": 4, 00:11:46.085 "base_bdevs_list": [ 00:11:46.085 { 00:11:46.085 "name": "pt1", 00:11:46.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.085 "is_configured": true, 00:11:46.085 "data_offset": 2048, 00:11:46.085 "data_size": 63488 00:11:46.085 }, 00:11:46.085 { 00:11:46.085 "name": "pt2", 00:11:46.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.085 "is_configured": true, 00:11:46.085 "data_offset": 2048, 00:11:46.085 "data_size": 63488 00:11:46.085 }, 00:11:46.085 { 00:11:46.085 "name": "pt3", 00:11:46.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.085 "is_configured": true, 00:11:46.085 "data_offset": 2048, 00:11:46.085 
"data_size": 63488 00:11:46.085 }, 00:11:46.085 { 00:11:46.085 "name": "pt4", 00:11:46.085 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.085 "is_configured": true, 00:11:46.085 "data_offset": 2048, 00:11:46.085 "data_size": 63488 00:11:46.085 } 00:11:46.085 ] 00:11:46.085 }' 00:11:46.085 13:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.085 13:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 [2024-11-18 13:28:16.328043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.346 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.346 "name": "raid_bdev1", 00:11:46.346 "aliases": [ 00:11:46.346 "c35ab4d2-0851-4b2b-95cd-706d2ddaf767" 
00:11:46.346 ], 00:11:46.346 "product_name": "Raid Volume", 00:11:46.346 "block_size": 512, 00:11:46.346 "num_blocks": 253952, 00:11:46.346 "uuid": "c35ab4d2-0851-4b2b-95cd-706d2ddaf767", 00:11:46.346 "assigned_rate_limits": { 00:11:46.346 "rw_ios_per_sec": 0, 00:11:46.346 "rw_mbytes_per_sec": 0, 00:11:46.346 "r_mbytes_per_sec": 0, 00:11:46.346 "w_mbytes_per_sec": 0 00:11:46.346 }, 00:11:46.346 "claimed": false, 00:11:46.346 "zoned": false, 00:11:46.346 "supported_io_types": { 00:11:46.346 "read": true, 00:11:46.346 "write": true, 00:11:46.346 "unmap": true, 00:11:46.346 "flush": true, 00:11:46.346 "reset": true, 00:11:46.346 "nvme_admin": false, 00:11:46.346 "nvme_io": false, 00:11:46.346 "nvme_io_md": false, 00:11:46.346 "write_zeroes": true, 00:11:46.346 "zcopy": false, 00:11:46.346 "get_zone_info": false, 00:11:46.346 "zone_management": false, 00:11:46.346 "zone_append": false, 00:11:46.346 "compare": false, 00:11:46.346 "compare_and_write": false, 00:11:46.346 "abort": false, 00:11:46.346 "seek_hole": false, 00:11:46.346 "seek_data": false, 00:11:46.346 "copy": false, 00:11:46.346 "nvme_iov_md": false 00:11:46.346 }, 00:11:46.346 "memory_domains": [ 00:11:46.346 { 00:11:46.346 "dma_device_id": "system", 00:11:46.346 "dma_device_type": 1 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.346 "dma_device_type": 2 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "system", 00:11:46.346 "dma_device_type": 1 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.346 "dma_device_type": 2 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "system", 00:11:46.346 "dma_device_type": 1 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.346 "dma_device_type": 2 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": "system", 00:11:46.346 "dma_device_type": 1 00:11:46.346 }, 00:11:46.346 { 00:11:46.346 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:46.346 "dma_device_type": 2 00:11:46.346 } 00:11:46.346 ], 00:11:46.347 "driver_specific": { 00:11:46.347 "raid": { 00:11:46.347 "uuid": "c35ab4d2-0851-4b2b-95cd-706d2ddaf767", 00:11:46.347 "strip_size_kb": 64, 00:11:46.347 "state": "online", 00:11:46.347 "raid_level": "concat", 00:11:46.347 "superblock": true, 00:11:46.347 "num_base_bdevs": 4, 00:11:46.347 "num_base_bdevs_discovered": 4, 00:11:46.347 "num_base_bdevs_operational": 4, 00:11:46.347 "base_bdevs_list": [ 00:11:46.347 { 00:11:46.347 "name": "pt1", 00:11:46.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.347 "is_configured": true, 00:11:46.347 "data_offset": 2048, 00:11:46.347 "data_size": 63488 00:11:46.347 }, 00:11:46.347 { 00:11:46.347 "name": "pt2", 00:11:46.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.347 "is_configured": true, 00:11:46.347 "data_offset": 2048, 00:11:46.347 "data_size": 63488 00:11:46.347 }, 00:11:46.347 { 00:11:46.347 "name": "pt3", 00:11:46.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.347 "is_configured": true, 00:11:46.347 "data_offset": 2048, 00:11:46.347 "data_size": 63488 00:11:46.347 }, 00:11:46.347 { 00:11:46.347 "name": "pt4", 00:11:46.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.347 "is_configured": true, 00:11:46.347 "data_offset": 2048, 00:11:46.347 "data_size": 63488 00:11:46.347 } 00:11:46.347 ] 00:11:46.347 } 00:11:46.347 } 00:11:46.347 }' 00:11:46.347 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:46.607 pt2 00:11:46.607 pt3 00:11:46.607 pt4' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.607 13:28:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:46.607 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:46.868 [2024-11-18 13:28:16.659572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c35ab4d2-0851-4b2b-95cd-706d2ddaf767 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c35ab4d2-0851-4b2b-95cd-706d2ddaf767 ']' 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 [2024-11-18 13:28:16.707110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.868 [2024-11-18 13:28:16.707187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.868 [2024-11-18 13:28:16.707357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.868 [2024-11-18 13:28:16.707507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.868 [2024-11-18 13:28:16.707550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.868 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.868 [2024-11-18 13:28:16.862895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:46.868 [2024-11-18 13:28:16.865229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:46.868 [2024-11-18 13:28:16.865297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:46.868 [2024-11-18 13:28:16.865338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:46.868 [2024-11-18 13:28:16.865415] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:46.868 [2024-11-18 13:28:16.865492] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:46.868 [2024-11-18 13:28:16.865517] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:46.868 [2024-11-18 13:28:16.865540] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:46.868 [2024-11-18 13:28:16.865556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.869 [2024-11-18 13:28:16.865570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:11:46.869 request: 00:11:46.869 { 00:11:46.869 "name": "raid_bdev1", 00:11:46.869 "raid_level": "concat", 00:11:46.869 "base_bdevs": [ 00:11:46.869 "malloc1", 00:11:46.869 "malloc2", 00:11:46.869 "malloc3", 00:11:46.869 "malloc4" 00:11:46.869 ], 00:11:46.869 "strip_size_kb": 64, 00:11:46.869 "superblock": false, 00:11:46.869 "method": "bdev_raid_create", 00:11:46.869 "req_id": 1 00:11:46.869 } 00:11:46.869 Got JSON-RPC error response 00:11:46.869 response: 00:11:46.869 { 00:11:46.869 "code": -17, 00:11:46.869 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:46.869 } 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:46.869 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 [2024-11-18 13:28:16.926761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.129 [2024-11-18 13:28:16.926865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.129 [2024-11-18 13:28:16.926889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:47.129 [2024-11-18 13:28:16.926903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.129 [2024-11-18 13:28:16.929652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.129 [2024-11-18 13:28:16.929715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.129 [2024-11-18 13:28:16.929844] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:47.129 [2024-11-18 13:28:16.929927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.129 pt1 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.129 "name": "raid_bdev1", 00:11:47.129 "uuid": "c35ab4d2-0851-4b2b-95cd-706d2ddaf767", 00:11:47.129 "strip_size_kb": 64, 00:11:47.129 "state": "configuring", 00:11:47.129 "raid_level": "concat", 00:11:47.129 "superblock": true, 00:11:47.129 "num_base_bdevs": 4, 00:11:47.129 "num_base_bdevs_discovered": 1, 00:11:47.129 "num_base_bdevs_operational": 4, 00:11:47.129 "base_bdevs_list": [ 00:11:47.129 { 00:11:47.129 "name": "pt1", 00:11:47.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.129 "is_configured": true, 00:11:47.129 "data_offset": 2048, 00:11:47.129 "data_size": 63488 00:11:47.129 }, 00:11:47.129 { 00:11:47.129 "name": null, 00:11:47.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.129 "is_configured": false, 00:11:47.129 "data_offset": 2048, 00:11:47.129 "data_size": 63488 00:11:47.129 }, 00:11:47.129 { 00:11:47.129 "name": null, 00:11:47.129 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.129 "is_configured": false, 00:11:47.129 "data_offset": 2048, 00:11:47.129 "data_size": 63488 00:11:47.129 }, 00:11:47.129 { 00:11:47.129 "name": null, 00:11:47.129 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.129 "is_configured": false, 00:11:47.129 "data_offset": 2048, 00:11:47.129 "data_size": 63488 00:11:47.129 } 00:11:47.129 ] 00:11:47.129 }' 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.129 13:28:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.390 [2024-11-18 13:28:17.349997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.390 [2024-11-18 13:28:17.350103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.390 [2024-11-18 13:28:17.350142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:47.390 [2024-11-18 13:28:17.350157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.390 [2024-11-18 13:28:17.350764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.390 [2024-11-18 13:28:17.350796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.390 [2024-11-18 13:28:17.350910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.390 [2024-11-18 13:28:17.350941] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.390 pt2 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.390 [2024-11-18 13:28:17.361982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.390 13:28:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.390 "name": "raid_bdev1", 00:11:47.390 "uuid": "c35ab4d2-0851-4b2b-95cd-706d2ddaf767", 00:11:47.390 "strip_size_kb": 64, 00:11:47.390 "state": "configuring", 00:11:47.390 "raid_level": "concat", 00:11:47.390 "superblock": true, 00:11:47.390 "num_base_bdevs": 4, 00:11:47.390 "num_base_bdevs_discovered": 1, 00:11:47.390 "num_base_bdevs_operational": 4, 00:11:47.390 "base_bdevs_list": [ 00:11:47.390 { 00:11:47.390 "name": "pt1", 00:11:47.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.390 "is_configured": true, 00:11:47.390 "data_offset": 2048, 00:11:47.390 "data_size": 63488 00:11:47.390 }, 00:11:47.390 { 00:11:47.390 "name": null, 00:11:47.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.390 "is_configured": false, 00:11:47.390 "data_offset": 0, 00:11:47.390 "data_size": 63488 00:11:47.390 }, 00:11:47.390 { 00:11:47.390 "name": null, 00:11:47.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.390 "is_configured": false, 00:11:47.390 "data_offset": 2048, 00:11:47.390 "data_size": 63488 00:11:47.390 }, 00:11:47.390 { 00:11:47.390 "name": null, 00:11:47.390 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.390 "is_configured": false, 00:11:47.390 "data_offset": 2048, 00:11:47.390 "data_size": 63488 00:11:47.390 } 00:11:47.390 ] 00:11:47.390 }' 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.390 13:28:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.959 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:47.959 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.959 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.959 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.959 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 [2024-11-18 13:28:17.777332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.960 [2024-11-18 13:28:17.777418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.960 [2024-11-18 13:28:17.777447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:47.960 [2024-11-18 13:28:17.777459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.960 [2024-11-18 13:28:17.778031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.960 [2024-11-18 13:28:17.778058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.960 [2024-11-18 13:28:17.778187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.960 [2024-11-18 13:28:17.778216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.960 pt2 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 [2024-11-18 13:28:17.789252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.960 [2024-11-18 13:28:17.789312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.960 [2024-11-18 13:28:17.789359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:47.960 [2024-11-18 13:28:17.789372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.960 [2024-11-18 13:28:17.789833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.960 [2024-11-18 13:28:17.789849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.960 [2024-11-18 13:28:17.789927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:47.960 [2024-11-18 13:28:17.789949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.960 pt3 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 [2024-11-18 13:28:17.801183] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:47.960 [2024-11-18 13:28:17.801255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.960 [2024-11-18 13:28:17.801277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:47.960 [2024-11-18 13:28:17.801287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.960 [2024-11-18 13:28:17.801723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.960 [2024-11-18 13:28:17.801749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:47.960 [2024-11-18 13:28:17.801827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:47.960 [2024-11-18 13:28:17.801848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:47.960 [2024-11-18 13:28:17.801991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.960 [2024-11-18 13:28:17.802007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:47.960 [2024-11-18 13:28:17.802289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:47.960 [2024-11-18 13:28:17.802463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.960 [2024-11-18 13:28:17.802478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:47.960 [2024-11-18 13:28:17.802631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.960 pt4 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.960 "name": "raid_bdev1", 00:11:47.960 "uuid": "c35ab4d2-0851-4b2b-95cd-706d2ddaf767", 00:11:47.960 "strip_size_kb": 64, 00:11:47.960 "state": "online", 00:11:47.960 "raid_level": "concat", 00:11:47.960 
"superblock": true, 00:11:47.960 "num_base_bdevs": 4, 00:11:47.960 "num_base_bdevs_discovered": 4, 00:11:47.960 "num_base_bdevs_operational": 4, 00:11:47.960 "base_bdevs_list": [ 00:11:47.960 { 00:11:47.960 "name": "pt1", 00:11:47.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.960 "is_configured": true, 00:11:47.960 "data_offset": 2048, 00:11:47.960 "data_size": 63488 00:11:47.960 }, 00:11:47.960 { 00:11:47.960 "name": "pt2", 00:11:47.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.960 "is_configured": true, 00:11:47.960 "data_offset": 2048, 00:11:47.960 "data_size": 63488 00:11:47.960 }, 00:11:47.960 { 00:11:47.960 "name": "pt3", 00:11:47.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.960 "is_configured": true, 00:11:47.960 "data_offset": 2048, 00:11:47.960 "data_size": 63488 00:11:47.960 }, 00:11:47.960 { 00:11:47.960 "name": "pt4", 00:11:47.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:47.960 "is_configured": true, 00:11:47.960 "data_offset": 2048, 00:11:47.960 "data_size": 63488 00:11:47.960 } 00:11:47.960 ] 00:11:47.960 }' 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.960 13:28:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.220 13:28:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.220 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.220 [2024-11-18 13:28:18.268801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.480 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.480 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.480 "name": "raid_bdev1", 00:11:48.480 "aliases": [ 00:11:48.480 "c35ab4d2-0851-4b2b-95cd-706d2ddaf767" 00:11:48.480 ], 00:11:48.480 "product_name": "Raid Volume", 00:11:48.480 "block_size": 512, 00:11:48.480 "num_blocks": 253952, 00:11:48.480 "uuid": "c35ab4d2-0851-4b2b-95cd-706d2ddaf767", 00:11:48.480 "assigned_rate_limits": { 00:11:48.480 "rw_ios_per_sec": 0, 00:11:48.480 "rw_mbytes_per_sec": 0, 00:11:48.480 "r_mbytes_per_sec": 0, 00:11:48.481 "w_mbytes_per_sec": 0 00:11:48.481 }, 00:11:48.481 "claimed": false, 00:11:48.481 "zoned": false, 00:11:48.481 "supported_io_types": { 00:11:48.481 "read": true, 00:11:48.481 "write": true, 00:11:48.481 "unmap": true, 00:11:48.481 "flush": true, 00:11:48.481 "reset": true, 00:11:48.481 "nvme_admin": false, 00:11:48.481 "nvme_io": false, 00:11:48.481 "nvme_io_md": false, 00:11:48.481 "write_zeroes": true, 00:11:48.481 "zcopy": false, 00:11:48.481 "get_zone_info": false, 00:11:48.481 "zone_management": false, 00:11:48.481 "zone_append": false, 00:11:48.481 "compare": false, 00:11:48.481 "compare_and_write": false, 00:11:48.481 "abort": false, 00:11:48.481 "seek_hole": false, 00:11:48.481 "seek_data": false, 00:11:48.481 "copy": false, 00:11:48.481 "nvme_iov_md": false 00:11:48.481 }, 00:11:48.481 
"memory_domains": [ 00:11:48.481 { 00:11:48.481 "dma_device_id": "system", 00:11:48.481 "dma_device_type": 1 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.481 "dma_device_type": 2 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "system", 00:11:48.481 "dma_device_type": 1 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.481 "dma_device_type": 2 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "system", 00:11:48.481 "dma_device_type": 1 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.481 "dma_device_type": 2 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "system", 00:11:48.481 "dma_device_type": 1 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.481 "dma_device_type": 2 00:11:48.481 } 00:11:48.481 ], 00:11:48.481 "driver_specific": { 00:11:48.481 "raid": { 00:11:48.481 "uuid": "c35ab4d2-0851-4b2b-95cd-706d2ddaf767", 00:11:48.481 "strip_size_kb": 64, 00:11:48.481 "state": "online", 00:11:48.481 "raid_level": "concat", 00:11:48.481 "superblock": true, 00:11:48.481 "num_base_bdevs": 4, 00:11:48.481 "num_base_bdevs_discovered": 4, 00:11:48.481 "num_base_bdevs_operational": 4, 00:11:48.481 "base_bdevs_list": [ 00:11:48.481 { 00:11:48.481 "name": "pt1", 00:11:48.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 2048, 00:11:48.481 "data_size": 63488 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "pt2", 00:11:48.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 2048, 00:11:48.481 "data_size": 63488 00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "pt3", 00:11:48.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 2048, 00:11:48.481 "data_size": 63488 
00:11:48.481 }, 00:11:48.481 { 00:11:48.481 "name": "pt4", 00:11:48.481 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:48.481 "is_configured": true, 00:11:48.481 "data_offset": 2048, 00:11:48.481 "data_size": 63488 00:11:48.481 } 00:11:48.481 ] 00:11:48.481 } 00:11:48.481 } 00:11:48.481 }' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.481 pt2 00:11:48.481 pt3 00:11:48.481 pt4' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.481 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:48.741 [2024-11-18 13:28:18.588259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c35ab4d2-0851-4b2b-95cd-706d2ddaf767 '!=' c35ab4d2-0851-4b2b-95cd-706d2ddaf767 ']' 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72629 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72629 ']' 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72629 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72629 00:11:48.741 killing process with pid 72629 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72629' 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72629 00:11:48.741 [2024-11-18 13:28:18.672906] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.741 [2024-11-18 13:28:18.673022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.741 13:28:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72629 00:11:48.741 [2024-11-18 13:28:18.673116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.741 [2024-11-18 13:28:18.673143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:49.314 [2024-11-18 13:28:19.111698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.694 13:28:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:50.694 00:11:50.694 real 0m5.679s 00:11:50.694 user 0m7.878s 00:11:50.694 sys 0m1.098s 00:11:50.694 13:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.694 13:28:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 ************************************ 00:11:50.694 END TEST raid_superblock_test 
00:11:50.694 ************************************ 00:11:50.694 13:28:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:50.694 13:28:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.694 13:28:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.694 13:28:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 ************************************ 00:11:50.694 START TEST raid_read_error_test 00:11:50.694 ************************************ 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yFjeMWFSvy 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72893 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72893 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72893 ']' 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.694 13:28:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 [2024-11-18 13:28:20.497590] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:50.694 [2024-11-18 13:28:20.497736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72893 ] 00:11:50.694 [2024-11-18 13:28:20.674334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.954 [2024-11-18 13:28:20.813618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.213 [2024-11-18 13:28:21.053666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.213 [2024-11-18 13:28:21.053746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.473 BaseBdev1_malloc 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.473 true 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.473 [2024-11-18 13:28:21.386069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:51.473 [2024-11-18 13:28:21.386170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.473 [2024-11-18 13:28:21.386195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:51.473 [2024-11-18 13:28:21.386210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.473 [2024-11-18 13:28:21.388678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.473 [2024-11-18 13:28:21.388726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.473 BaseBdev1 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.473 BaseBdev2_malloc 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.473 true 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.473 [2024-11-18 13:28:21.458007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.473 [2024-11-18 13:28:21.458076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.473 [2024-11-18 13:28:21.458095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:51.473 [2024-11-18 13:28:21.458108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.473 [2024-11-18 13:28:21.460384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.473 [2024-11-18 13:28:21.460429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.473 BaseBdev2 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.473 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.733 BaseBdev3_malloc 00:11:51.733 13:28:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.733 true 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.733 [2024-11-18 13:28:21.545774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:51.733 [2024-11-18 13:28:21.545838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.733 [2024-11-18 13:28:21.545875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:51.733 [2024-11-18 13:28:21.545889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.733 [2024-11-18 13:28:21.548439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.733 [2024-11-18 13:28:21.548486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:51.733 BaseBdev3 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.733 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.734 BaseBdev4_malloc 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.734 true 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.734 [2024-11-18 13:28:21.620586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:51.734 [2024-11-18 13:28:21.620663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.734 [2024-11-18 13:28:21.620689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:51.734 [2024-11-18 13:28:21.620704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.734 [2024-11-18 13:28:21.623316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.734 [2024-11-18 13:28:21.623385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:51.734 BaseBdev4 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.734 [2024-11-18 13:28:21.632711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.734 [2024-11-18 13:28:21.635052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.734 [2024-11-18 13:28:21.635175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.734 [2024-11-18 13:28:21.635257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.734 [2024-11-18 13:28:21.635581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:51.734 [2024-11-18 13:28:21.635611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:51.734 [2024-11-18 13:28:21.635970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:51.734 [2024-11-18 13:28:21.636217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:51.734 [2024-11-18 13:28:21.636238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:51.734 [2024-11-18 13:28:21.636559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:51.734 13:28:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.734 "name": "raid_bdev1", 00:11:51.734 "uuid": "41ccd619-d8e2-4eca-8491-ea355b07cb2f", 00:11:51.734 "strip_size_kb": 64, 00:11:51.734 "state": "online", 00:11:51.734 "raid_level": "concat", 00:11:51.734 "superblock": true, 00:11:51.734 "num_base_bdevs": 4, 00:11:51.734 "num_base_bdevs_discovered": 4, 00:11:51.734 "num_base_bdevs_operational": 4, 00:11:51.734 "base_bdevs_list": [ 
00:11:51.734 { 00:11:51.734 "name": "BaseBdev1", 00:11:51.734 "uuid": "de2d91b3-7954-581c-b0a1-4879f481f069", 00:11:51.734 "is_configured": true, 00:11:51.734 "data_offset": 2048, 00:11:51.734 "data_size": 63488 00:11:51.734 }, 00:11:51.734 { 00:11:51.734 "name": "BaseBdev2", 00:11:51.734 "uuid": "87212bd6-ebab-510c-bd54-c444edf9d7e4", 00:11:51.734 "is_configured": true, 00:11:51.734 "data_offset": 2048, 00:11:51.734 "data_size": 63488 00:11:51.734 }, 00:11:51.734 { 00:11:51.734 "name": "BaseBdev3", 00:11:51.734 "uuid": "efca90de-fb9a-5d20-979d-669c8638ba59", 00:11:51.734 "is_configured": true, 00:11:51.734 "data_offset": 2048, 00:11:51.734 "data_size": 63488 00:11:51.734 }, 00:11:51.734 { 00:11:51.734 "name": "BaseBdev4", 00:11:51.734 "uuid": "dbc75ad5-8462-560b-8ef3-0db5e94a7c4b", 00:11:51.734 "is_configured": true, 00:11:51.734 "data_offset": 2048, 00:11:51.734 "data_size": 63488 00:11:51.734 } 00:11:51.734 ] 00:11:51.734 }' 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.734 13:28:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.304 13:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:52.304 13:28:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.304 [2024-11-18 13:28:22.169170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.243 13:28:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.243 13:28:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.243 "name": "raid_bdev1", 00:11:53.243 "uuid": "41ccd619-d8e2-4eca-8491-ea355b07cb2f", 00:11:53.243 "strip_size_kb": 64, 00:11:53.243 "state": "online", 00:11:53.243 "raid_level": "concat", 00:11:53.243 "superblock": true, 00:11:53.243 "num_base_bdevs": 4, 00:11:53.243 "num_base_bdevs_discovered": 4, 00:11:53.243 "num_base_bdevs_operational": 4, 00:11:53.243 "base_bdevs_list": [ 00:11:53.243 { 00:11:53.243 "name": "BaseBdev1", 00:11:53.243 "uuid": "de2d91b3-7954-581c-b0a1-4879f481f069", 00:11:53.243 "is_configured": true, 00:11:53.243 "data_offset": 2048, 00:11:53.243 "data_size": 63488 00:11:53.243 }, 00:11:53.243 { 00:11:53.243 "name": "BaseBdev2", 00:11:53.243 "uuid": "87212bd6-ebab-510c-bd54-c444edf9d7e4", 00:11:53.243 "is_configured": true, 00:11:53.243 "data_offset": 2048, 00:11:53.243 "data_size": 63488 00:11:53.243 }, 00:11:53.243 { 00:11:53.243 "name": "BaseBdev3", 00:11:53.243 "uuid": "efca90de-fb9a-5d20-979d-669c8638ba59", 00:11:53.243 "is_configured": true, 00:11:53.243 "data_offset": 2048, 00:11:53.243 "data_size": 63488 00:11:53.243 }, 00:11:53.243 { 00:11:53.243 "name": "BaseBdev4", 00:11:53.243 "uuid": "dbc75ad5-8462-560b-8ef3-0db5e94a7c4b", 00:11:53.243 "is_configured": true, 00:11:53.243 "data_offset": 2048, 00:11:53.243 "data_size": 63488 00:11:53.243 } 00:11:53.243 ] 00:11:53.243 }' 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.243 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.503 [2024-11-18 13:28:23.511181] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.503 [2024-11-18 13:28:23.511228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.503 [2024-11-18 13:28:23.514037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.503 [2024-11-18 13:28:23.514116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.503 [2024-11-18 13:28:23.514189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.503 [2024-11-18 13:28:23.514211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:53.503 { 00:11:53.503 "results": [ 00:11:53.503 { 00:11:53.503 "job": "raid_bdev1", 00:11:53.503 "core_mask": "0x1", 00:11:53.503 "workload": "randrw", 00:11:53.503 "percentage": 50, 00:11:53.503 "status": "finished", 00:11:53.503 "queue_depth": 1, 00:11:53.503 "io_size": 131072, 00:11:53.503 "runtime": 1.342516, 00:11:53.503 "iops": 12757.389856061305, 00:11:53.503 "mibps": 1594.6737320076631, 00:11:53.503 "io_failed": 1, 00:11:53.503 "io_timeout": 0, 00:11:53.503 "avg_latency_us": 110.45687421092458, 00:11:53.503 "min_latency_us": 27.388646288209607, 00:11:53.503 "max_latency_us": 1387.989519650655 00:11:53.503 } 00:11:53.503 ], 00:11:53.503 "core_count": 1 00:11:53.503 } 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72893 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72893 ']' 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72893 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.503 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72893 00:11:53.763 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.763 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.763 killing process with pid 72893 00:11:53.763 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72893' 00:11:53.763 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72893 00:11:53.763 [2024-11-18 13:28:23.559484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.763 13:28:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72893 00:11:54.022 [2024-11-18 13:28:23.917191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.406 13:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yFjeMWFSvy 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:55.407 00:11:55.407 real 0m4.834s 00:11:55.407 user 0m5.477s 00:11:55.407 sys 0m0.743s 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:55.407 13:28:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.407 ************************************ 00:11:55.407 END TEST raid_read_error_test 00:11:55.407 ************************************ 00:11:55.407 13:28:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:55.407 13:28:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.407 13:28:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.407 13:28:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.407 ************************************ 00:11:55.407 START TEST raid_write_error_test 00:11:55.407 ************************************ 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1sqrC9KFXx 00:11:55.407 13:28:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73039 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73039 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73039 ']' 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.407 13:28:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.407 [2024-11-18 13:28:25.399501] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:11:55.407 [2024-11-18 13:28:25.399620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73039 ] 00:11:55.667 [2024-11-18 13:28:25.573185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.667 [2024-11-18 13:28:25.714183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.927 [2024-11-18 13:28:25.965456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.927 [2024-11-18 13:28:25.965513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.187 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.187 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.187 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.187 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:56.187 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.187 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.446 BaseBdev1_malloc 00:11:56.446 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.447 true 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.447 [2024-11-18 13:28:26.296533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:56.447 [2024-11-18 13:28:26.296653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.447 [2024-11-18 13:28:26.296697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:56.447 [2024-11-18 13:28:26.296712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.447 [2024-11-18 13:28:26.299233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.447 [2024-11-18 13:28:26.299278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.447 BaseBdev1 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.447 BaseBdev2_malloc 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:56.447 13:28:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.447 true 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.447 [2024-11-18 13:28:26.369337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:56.447 [2024-11-18 13:28:26.369405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.447 [2024-11-18 13:28:26.369425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:56.447 [2024-11-18 13:28:26.369440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.447 [2024-11-18 13:28:26.372427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.447 [2024-11-18 13:28:26.372478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:56.447 BaseBdev2 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:56.447 BaseBdev3_malloc 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.447 true 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.447 [2024-11-18 13:28:26.454456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:56.447 [2024-11-18 13:28:26.454594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.447 [2024-11-18 13:28:26.454640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:56.447 [2024-11-18 13:28:26.454654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.447 [2024-11-18 13:28:26.457166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.447 [2024-11-18 13:28:26.457211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:56.447 BaseBdev3 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.447 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.706 BaseBdev4_malloc 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.707 true 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.707 [2024-11-18 13:28:26.529294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:56.707 [2024-11-18 13:28:26.529367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.707 [2024-11-18 13:28:26.529389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:56.707 [2024-11-18 13:28:26.529402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.707 [2024-11-18 13:28:26.531939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.707 [2024-11-18 13:28:26.531987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:56.707 BaseBdev4 
00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.707 [2024-11-18 13:28:26.541347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.707 [2024-11-18 13:28:26.543455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.707 [2024-11-18 13:28:26.543544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.707 [2024-11-18 13:28:26.543619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.707 [2024-11-18 13:28:26.543864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:56.707 [2024-11-18 13:28:26.543879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:56.707 [2024-11-18 13:28:26.544185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:56.707 [2024-11-18 13:28:26.544394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:56.707 [2024-11-18 13:28:26.544406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:56.707 [2024-11-18 13:28:26.544596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.707 "name": "raid_bdev1", 00:11:56.707 "uuid": "7316646f-c486-4f94-91d6-39f10ad05d19", 00:11:56.707 "strip_size_kb": 64, 00:11:56.707 "state": "online", 00:11:56.707 "raid_level": "concat", 00:11:56.707 "superblock": true, 00:11:56.707 "num_base_bdevs": 4, 00:11:56.707 "num_base_bdevs_discovered": 4, 00:11:56.707 
"num_base_bdevs_operational": 4, 00:11:56.707 "base_bdevs_list": [ 00:11:56.707 { 00:11:56.707 "name": "BaseBdev1", 00:11:56.707 "uuid": "1620bd7c-3225-5d49-a2e1-d7b3aade093f", 00:11:56.707 "is_configured": true, 00:11:56.707 "data_offset": 2048, 00:11:56.707 "data_size": 63488 00:11:56.707 }, 00:11:56.707 { 00:11:56.707 "name": "BaseBdev2", 00:11:56.707 "uuid": "0d6d985d-a09e-5efb-8bfc-92dd7a4184a8", 00:11:56.707 "is_configured": true, 00:11:56.707 "data_offset": 2048, 00:11:56.707 "data_size": 63488 00:11:56.707 }, 00:11:56.707 { 00:11:56.707 "name": "BaseBdev3", 00:11:56.707 "uuid": "b8fd8d05-7a21-5437-92e5-37558520479e", 00:11:56.707 "is_configured": true, 00:11:56.707 "data_offset": 2048, 00:11:56.707 "data_size": 63488 00:11:56.707 }, 00:11:56.707 { 00:11:56.707 "name": "BaseBdev4", 00:11:56.707 "uuid": "da1acb4f-293d-5660-9a4b-90569b316818", 00:11:56.707 "is_configured": true, 00:11:56.707 "data_offset": 2048, 00:11:56.707 "data_size": 63488 00:11:56.707 } 00:11:56.707 ] 00:11:56.707 }' 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.707 13:28:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.966 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:56.966 13:28:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:57.225 [2024-11-18 13:28:27.065854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.165 13:28:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.165 13:28:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.165 13:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.165 "name": "raid_bdev1", 00:11:58.165 "uuid": "7316646f-c486-4f94-91d6-39f10ad05d19", 00:11:58.165 "strip_size_kb": 64, 00:11:58.165 "state": "online", 00:11:58.165 "raid_level": "concat", 00:11:58.165 "superblock": true, 00:11:58.165 "num_base_bdevs": 4, 00:11:58.165 "num_base_bdevs_discovered": 4, 00:11:58.165 "num_base_bdevs_operational": 4, 00:11:58.165 "base_bdevs_list": [ 00:11:58.165 { 00:11:58.165 "name": "BaseBdev1", 00:11:58.165 "uuid": "1620bd7c-3225-5d49-a2e1-d7b3aade093f", 00:11:58.165 "is_configured": true, 00:11:58.165 "data_offset": 2048, 00:11:58.165 "data_size": 63488 00:11:58.165 }, 00:11:58.165 { 00:11:58.165 "name": "BaseBdev2", 00:11:58.165 "uuid": "0d6d985d-a09e-5efb-8bfc-92dd7a4184a8", 00:11:58.165 "is_configured": true, 00:11:58.165 "data_offset": 2048, 00:11:58.165 "data_size": 63488 00:11:58.165 }, 00:11:58.165 { 00:11:58.165 "name": "BaseBdev3", 00:11:58.165 "uuid": "b8fd8d05-7a21-5437-92e5-37558520479e", 00:11:58.165 "is_configured": true, 00:11:58.165 "data_offset": 2048, 00:11:58.165 "data_size": 63488 00:11:58.165 }, 00:11:58.165 { 00:11:58.165 "name": "BaseBdev4", 00:11:58.165 "uuid": "da1acb4f-293d-5660-9a4b-90569b316818", 00:11:58.165 "is_configured": true, 00:11:58.165 "data_offset": 2048, 00:11:58.165 "data_size": 63488 00:11:58.165 } 00:11:58.165 ] 00:11:58.165 }' 00:11:58.165 13:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.165 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.425 [2024-11-18 13:28:28.432294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.425 [2024-11-18 13:28:28.432438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.425 [2024-11-18 13:28:28.435673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.425 [2024-11-18 13:28:28.435883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.425 [2024-11-18 13:28:28.436022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.425 [2024-11-18 13:28:28.436147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:58.425 { 00:11:58.425 "results": [ 00:11:58.425 { 00:11:58.425 "job": "raid_bdev1", 00:11:58.425 "core_mask": "0x1", 00:11:58.425 "workload": "randrw", 00:11:58.425 "percentage": 50, 00:11:58.425 "status": "finished", 00:11:58.425 "queue_depth": 1, 00:11:58.425 "io_size": 131072, 00:11:58.425 "runtime": 1.367018, 00:11:58.425 "iops": 12594.567152736832, 00:11:58.425 "mibps": 1574.320894092104, 00:11:58.425 "io_failed": 1, 00:11:58.425 "io_timeout": 0, 00:11:58.425 "avg_latency_us": 111.79956712306254, 00:11:58.425 "min_latency_us": 27.612227074235808, 00:11:58.425 "max_latency_us": 1430.9170305676855 00:11:58.425 } 00:11:58.425 ], 00:11:58.425 "core_count": 1 00:11:58.425 } 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73039 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73039 ']' 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73039 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.425 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73039 00:11:58.684 killing process with pid 73039 00:11:58.684 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.684 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.684 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73039' 00:11:58.684 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73039 00:11:58.684 [2024-11-18 13:28:28.483209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.684 13:28:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73039 00:11:58.943 [2024-11-18 13:28:28.833475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1sqrC9KFXx 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:00.324 00:12:00.324 real 0m4.838s 00:12:00.324 user 0m5.503s 
00:12:00.324 sys 0m0.732s 00:12:00.324 ************************************ 00:12:00.324 END TEST raid_write_error_test 00:12:00.324 ************************************ 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.324 13:28:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.324 13:28:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:00.324 13:28:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:00.324 13:28:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:00.324 13:28:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.324 13:28:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.324 ************************************ 00:12:00.324 START TEST raid_state_function_test 00:12:00.324 ************************************ 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.324 
13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:00.324 13:28:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:00.324 Process raid pid: 73188 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73188 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73188' 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73188 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73188 ']' 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.324 13:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.324 [2024-11-18 13:28:30.306716] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:00.325 [2024-11-18 13:28:30.306946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.584 [2024-11-18 13:28:30.487782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.844 [2024-11-18 13:28:30.641711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.844 [2024-11-18 13:28:30.880828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.844 [2024-11-18 13:28:30.881005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.104 [2024-11-18 13:28:31.126987] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.104 [2024-11-18 13:28:31.127140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.104 [2024-11-18 13:28:31.127182] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.104 [2024-11-18 13:28:31.127214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.104 [2024-11-18 13:28:31.127255] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:01.104 [2024-11-18 13:28:31.127297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.104 [2024-11-18 13:28:31.127326] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.104 [2024-11-18 13:28:31.127372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.104 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.365 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.365 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.365 "name": "Existed_Raid", 00:12:01.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.365 "strip_size_kb": 0, 00:12:01.365 "state": "configuring", 00:12:01.365 "raid_level": "raid1", 00:12:01.365 "superblock": false, 00:12:01.365 "num_base_bdevs": 4, 00:12:01.365 "num_base_bdevs_discovered": 0, 00:12:01.365 "num_base_bdevs_operational": 4, 00:12:01.365 "base_bdevs_list": [ 00:12:01.365 { 00:12:01.365 "name": "BaseBdev1", 00:12:01.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.365 "is_configured": false, 00:12:01.365 "data_offset": 0, 00:12:01.365 "data_size": 0 00:12:01.365 }, 00:12:01.365 { 00:12:01.365 "name": "BaseBdev2", 00:12:01.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.365 "is_configured": false, 00:12:01.365 "data_offset": 0, 00:12:01.365 "data_size": 0 00:12:01.365 }, 00:12:01.365 { 00:12:01.365 "name": "BaseBdev3", 00:12:01.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.365 "is_configured": false, 00:12:01.365 "data_offset": 0, 00:12:01.365 "data_size": 0 00:12:01.365 }, 00:12:01.365 { 00:12:01.365 "name": "BaseBdev4", 00:12:01.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.365 "is_configured": false, 00:12:01.365 "data_offset": 0, 00:12:01.365 "data_size": 0 00:12:01.365 } 00:12:01.365 ] 00:12:01.365 }' 00:12:01.365 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.365 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.625 [2024-11-18 13:28:31.590263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.625 [2024-11-18 13:28:31.590380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.625 [2024-11-18 13:28:31.602205] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.625 [2024-11-18 13:28:31.602302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.625 [2024-11-18 13:28:31.602336] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.625 [2024-11-18 13:28:31.602365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.625 [2024-11-18 13:28:31.602388] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.625 [2024-11-18 13:28:31.602431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.625 [2024-11-18 13:28:31.602463] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.625 [2024-11-18 13:28:31.602493] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.625 [2024-11-18 13:28:31.658738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.625 BaseBdev1 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.625 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.885 [ 00:12:01.885 { 00:12:01.885 "name": "BaseBdev1", 00:12:01.885 "aliases": [ 00:12:01.885 "42cd27e2-ac81-4ed3-8773-9efd4f3d1f07" 00:12:01.885 ], 00:12:01.885 "product_name": "Malloc disk", 00:12:01.885 "block_size": 512, 00:12:01.885 "num_blocks": 65536, 00:12:01.885 "uuid": "42cd27e2-ac81-4ed3-8773-9efd4f3d1f07", 00:12:01.885 "assigned_rate_limits": { 00:12:01.885 "rw_ios_per_sec": 0, 00:12:01.886 "rw_mbytes_per_sec": 0, 00:12:01.886 "r_mbytes_per_sec": 0, 00:12:01.886 "w_mbytes_per_sec": 0 00:12:01.886 }, 00:12:01.886 "claimed": true, 00:12:01.886 "claim_type": "exclusive_write", 00:12:01.886 "zoned": false, 00:12:01.886 "supported_io_types": { 00:12:01.886 "read": true, 00:12:01.886 "write": true, 00:12:01.886 "unmap": true, 00:12:01.886 "flush": true, 00:12:01.886 "reset": true, 00:12:01.886 "nvme_admin": false, 00:12:01.886 "nvme_io": false, 00:12:01.886 "nvme_io_md": false, 00:12:01.886 "write_zeroes": true, 00:12:01.886 "zcopy": true, 00:12:01.886 "get_zone_info": false, 00:12:01.886 "zone_management": false, 00:12:01.886 "zone_append": false, 00:12:01.886 "compare": false, 00:12:01.886 "compare_and_write": false, 00:12:01.886 "abort": true, 00:12:01.886 "seek_hole": false, 00:12:01.886 "seek_data": false, 00:12:01.886 "copy": true, 00:12:01.886 "nvme_iov_md": false 00:12:01.886 }, 00:12:01.886 "memory_domains": [ 00:12:01.886 { 00:12:01.886 "dma_device_id": "system", 00:12:01.886 "dma_device_type": 1 00:12:01.886 }, 00:12:01.886 { 00:12:01.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.886 "dma_device_type": 2 00:12:01.886 } 00:12:01.886 ], 00:12:01.886 "driver_specific": {} 00:12:01.886 } 00:12:01.886 ] 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.886 "name": "Existed_Raid", 
00:12:01.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.886 "strip_size_kb": 0, 00:12:01.886 "state": "configuring", 00:12:01.886 "raid_level": "raid1", 00:12:01.886 "superblock": false, 00:12:01.886 "num_base_bdevs": 4, 00:12:01.886 "num_base_bdevs_discovered": 1, 00:12:01.886 "num_base_bdevs_operational": 4, 00:12:01.886 "base_bdevs_list": [ 00:12:01.886 { 00:12:01.886 "name": "BaseBdev1", 00:12:01.886 "uuid": "42cd27e2-ac81-4ed3-8773-9efd4f3d1f07", 00:12:01.886 "is_configured": true, 00:12:01.886 "data_offset": 0, 00:12:01.886 "data_size": 65536 00:12:01.886 }, 00:12:01.886 { 00:12:01.886 "name": "BaseBdev2", 00:12:01.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.886 "is_configured": false, 00:12:01.886 "data_offset": 0, 00:12:01.886 "data_size": 0 00:12:01.886 }, 00:12:01.886 { 00:12:01.886 "name": "BaseBdev3", 00:12:01.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.886 "is_configured": false, 00:12:01.886 "data_offset": 0, 00:12:01.886 "data_size": 0 00:12:01.886 }, 00:12:01.886 { 00:12:01.886 "name": "BaseBdev4", 00:12:01.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.886 "is_configured": false, 00:12:01.886 "data_offset": 0, 00:12:01.886 "data_size": 0 00:12:01.886 } 00:12:01.886 ] 00:12:01.886 }' 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.886 13:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.146 [2024-11-18 13:28:32.137894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.146 [2024-11-18 13:28:32.138046] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.146 [2024-11-18 13:28:32.149918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.146 [2024-11-18 13:28:32.152389] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.146 [2024-11-18 13:28:32.152500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.146 [2024-11-18 13:28:32.152537] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.146 [2024-11-18 13:28:32.152554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.146 [2024-11-18 13:28:32.152564] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.146 [2024-11-18 13:28:32.152577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.146 
13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.146 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.406 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.406 "name": "Existed_Raid", 00:12:02.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.406 "strip_size_kb": 0, 00:12:02.406 "state": "configuring", 00:12:02.406 "raid_level": "raid1", 00:12:02.406 "superblock": false, 00:12:02.406 "num_base_bdevs": 4, 00:12:02.406 "num_base_bdevs_discovered": 1, 
00:12:02.406 "num_base_bdevs_operational": 4, 00:12:02.406 "base_bdevs_list": [ 00:12:02.406 { 00:12:02.406 "name": "BaseBdev1", 00:12:02.406 "uuid": "42cd27e2-ac81-4ed3-8773-9efd4f3d1f07", 00:12:02.406 "is_configured": true, 00:12:02.406 "data_offset": 0, 00:12:02.406 "data_size": 65536 00:12:02.406 }, 00:12:02.406 { 00:12:02.406 "name": "BaseBdev2", 00:12:02.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.406 "is_configured": false, 00:12:02.406 "data_offset": 0, 00:12:02.406 "data_size": 0 00:12:02.406 }, 00:12:02.406 { 00:12:02.406 "name": "BaseBdev3", 00:12:02.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.406 "is_configured": false, 00:12:02.406 "data_offset": 0, 00:12:02.406 "data_size": 0 00:12:02.406 }, 00:12:02.406 { 00:12:02.406 "name": "BaseBdev4", 00:12:02.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.406 "is_configured": false, 00:12:02.406 "data_offset": 0, 00:12:02.406 "data_size": 0 00:12:02.406 } 00:12:02.406 ] 00:12:02.406 }' 00:12:02.406 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.406 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.666 [2024-11-18 13:28:32.692746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.666 BaseBdev2 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.666 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.926 [ 00:12:02.926 { 00:12:02.926 "name": "BaseBdev2", 00:12:02.926 "aliases": [ 00:12:02.926 "db02008f-dc48-4f01-9515-784708b63052" 00:12:02.926 ], 00:12:02.926 "product_name": "Malloc disk", 00:12:02.926 "block_size": 512, 00:12:02.926 "num_blocks": 65536, 00:12:02.926 "uuid": "db02008f-dc48-4f01-9515-784708b63052", 00:12:02.926 "assigned_rate_limits": { 00:12:02.926 "rw_ios_per_sec": 0, 00:12:02.926 "rw_mbytes_per_sec": 0, 00:12:02.926 "r_mbytes_per_sec": 0, 00:12:02.926 "w_mbytes_per_sec": 0 00:12:02.926 }, 00:12:02.926 "claimed": true, 00:12:02.926 "claim_type": "exclusive_write", 00:12:02.926 "zoned": false, 00:12:02.926 "supported_io_types": { 00:12:02.926 "read": true, 
00:12:02.926 "write": true, 00:12:02.926 "unmap": true, 00:12:02.926 "flush": true, 00:12:02.926 "reset": true, 00:12:02.926 "nvme_admin": false, 00:12:02.926 "nvme_io": false, 00:12:02.926 "nvme_io_md": false, 00:12:02.926 "write_zeroes": true, 00:12:02.926 "zcopy": true, 00:12:02.926 "get_zone_info": false, 00:12:02.926 "zone_management": false, 00:12:02.926 "zone_append": false, 00:12:02.926 "compare": false, 00:12:02.926 "compare_and_write": false, 00:12:02.926 "abort": true, 00:12:02.926 "seek_hole": false, 00:12:02.926 "seek_data": false, 00:12:02.926 "copy": true, 00:12:02.926 "nvme_iov_md": false 00:12:02.926 }, 00:12:02.926 "memory_domains": [ 00:12:02.926 { 00:12:02.926 "dma_device_id": "system", 00:12:02.926 "dma_device_type": 1 00:12:02.926 }, 00:12:02.926 { 00:12:02.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.926 "dma_device_type": 2 00:12:02.926 } 00:12:02.926 ], 00:12:02.926 "driver_specific": {} 00:12:02.926 } 00:12:02.926 ] 00:12:02.926 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.926 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.926 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.926 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.926 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.926 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.926 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.927 "name": "Existed_Raid", 00:12:02.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.927 "strip_size_kb": 0, 00:12:02.927 "state": "configuring", 00:12:02.927 "raid_level": "raid1", 00:12:02.927 "superblock": false, 00:12:02.927 "num_base_bdevs": 4, 00:12:02.927 "num_base_bdevs_discovered": 2, 00:12:02.927 "num_base_bdevs_operational": 4, 00:12:02.927 "base_bdevs_list": [ 00:12:02.927 { 00:12:02.927 "name": "BaseBdev1", 00:12:02.927 "uuid": "42cd27e2-ac81-4ed3-8773-9efd4f3d1f07", 00:12:02.927 "is_configured": true, 00:12:02.927 "data_offset": 0, 00:12:02.927 "data_size": 65536 00:12:02.927 }, 00:12:02.927 { 00:12:02.927 "name": "BaseBdev2", 00:12:02.927 "uuid": "db02008f-dc48-4f01-9515-784708b63052", 00:12:02.927 "is_configured": true, 
00:12:02.927 "data_offset": 0, 00:12:02.927 "data_size": 65536 00:12:02.927 }, 00:12:02.927 { 00:12:02.927 "name": "BaseBdev3", 00:12:02.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.927 "is_configured": false, 00:12:02.927 "data_offset": 0, 00:12:02.927 "data_size": 0 00:12:02.927 }, 00:12:02.927 { 00:12:02.927 "name": "BaseBdev4", 00:12:02.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.927 "is_configured": false, 00:12:02.927 "data_offset": 0, 00:12:02.927 "data_size": 0 00:12:02.927 } 00:12:02.927 ] 00:12:02.927 }' 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.927 13:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.187 [2024-11-18 13:28:33.195222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.187 BaseBdev3 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.187 [ 00:12:03.187 { 00:12:03.187 "name": "BaseBdev3", 00:12:03.187 "aliases": [ 00:12:03.187 "c65553c1-d2cb-4761-a6ed-035aeacb6e28" 00:12:03.187 ], 00:12:03.187 "product_name": "Malloc disk", 00:12:03.187 "block_size": 512, 00:12:03.187 "num_blocks": 65536, 00:12:03.187 "uuid": "c65553c1-d2cb-4761-a6ed-035aeacb6e28", 00:12:03.187 "assigned_rate_limits": { 00:12:03.187 "rw_ios_per_sec": 0, 00:12:03.187 "rw_mbytes_per_sec": 0, 00:12:03.187 "r_mbytes_per_sec": 0, 00:12:03.187 "w_mbytes_per_sec": 0 00:12:03.187 }, 00:12:03.187 "claimed": true, 00:12:03.187 "claim_type": "exclusive_write", 00:12:03.187 "zoned": false, 00:12:03.187 "supported_io_types": { 00:12:03.187 "read": true, 00:12:03.187 "write": true, 00:12:03.187 "unmap": true, 00:12:03.187 "flush": true, 00:12:03.187 "reset": true, 00:12:03.187 "nvme_admin": false, 00:12:03.187 "nvme_io": false, 00:12:03.187 "nvme_io_md": false, 00:12:03.187 "write_zeroes": true, 00:12:03.187 "zcopy": true, 00:12:03.187 "get_zone_info": false, 00:12:03.187 "zone_management": false, 00:12:03.187 "zone_append": false, 00:12:03.187 "compare": false, 00:12:03.187 "compare_and_write": false, 
00:12:03.187 "abort": true, 00:12:03.187 "seek_hole": false, 00:12:03.187 "seek_data": false, 00:12:03.187 "copy": true, 00:12:03.187 "nvme_iov_md": false 00:12:03.187 }, 00:12:03.187 "memory_domains": [ 00:12:03.187 { 00:12:03.187 "dma_device_id": "system", 00:12:03.187 "dma_device_type": 1 00:12:03.187 }, 00:12:03.187 { 00:12:03.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.187 "dma_device_type": 2 00:12:03.187 } 00:12:03.187 ], 00:12:03.187 "driver_specific": {} 00:12:03.187 } 00:12:03.187 ] 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.187 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.447 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.447 "name": "Existed_Raid", 00:12:03.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.447 "strip_size_kb": 0, 00:12:03.447 "state": "configuring", 00:12:03.447 "raid_level": "raid1", 00:12:03.447 "superblock": false, 00:12:03.447 "num_base_bdevs": 4, 00:12:03.447 "num_base_bdevs_discovered": 3, 00:12:03.447 "num_base_bdevs_operational": 4, 00:12:03.447 "base_bdevs_list": [ 00:12:03.447 { 00:12:03.447 "name": "BaseBdev1", 00:12:03.447 "uuid": "42cd27e2-ac81-4ed3-8773-9efd4f3d1f07", 00:12:03.447 "is_configured": true, 00:12:03.447 "data_offset": 0, 00:12:03.448 "data_size": 65536 00:12:03.448 }, 00:12:03.448 { 00:12:03.448 "name": "BaseBdev2", 00:12:03.448 "uuid": "db02008f-dc48-4f01-9515-784708b63052", 00:12:03.448 "is_configured": true, 00:12:03.448 "data_offset": 0, 00:12:03.448 "data_size": 65536 00:12:03.448 }, 00:12:03.448 { 00:12:03.448 "name": "BaseBdev3", 00:12:03.448 "uuid": "c65553c1-d2cb-4761-a6ed-035aeacb6e28", 00:12:03.448 "is_configured": true, 00:12:03.448 "data_offset": 0, 00:12:03.448 "data_size": 65536 00:12:03.448 }, 00:12:03.448 { 00:12:03.448 "name": "BaseBdev4", 00:12:03.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.448 "is_configured": false, 
00:12:03.448 "data_offset": 0, 00:12:03.448 "data_size": 0 00:12:03.448 } 00:12:03.448 ] 00:12:03.448 }' 00:12:03.448 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.448 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.707 [2024-11-18 13:28:33.747078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.707 [2024-11-18 13:28:33.747345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:03.707 [2024-11-18 13:28:33.747363] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.707 [2024-11-18 13:28:33.747753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:03.707 [2024-11-18 13:28:33.747984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:03.707 [2024-11-18 13:28:33.748003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:03.707 [2024-11-18 13:28:33.748379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.707 BaseBdev4 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.707 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.967 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.967 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:03.967 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.967 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.967 [ 00:12:03.967 { 00:12:03.967 "name": "BaseBdev4", 00:12:03.967 "aliases": [ 00:12:03.967 "13420078-2df7-4ba4-a001-49259874055e" 00:12:03.967 ], 00:12:03.967 "product_name": "Malloc disk", 00:12:03.967 "block_size": 512, 00:12:03.967 "num_blocks": 65536, 00:12:03.967 "uuid": "13420078-2df7-4ba4-a001-49259874055e", 00:12:03.967 "assigned_rate_limits": { 00:12:03.967 "rw_ios_per_sec": 0, 00:12:03.967 "rw_mbytes_per_sec": 0, 00:12:03.967 "r_mbytes_per_sec": 0, 00:12:03.967 "w_mbytes_per_sec": 0 00:12:03.967 }, 00:12:03.967 "claimed": true, 00:12:03.967 "claim_type": "exclusive_write", 00:12:03.967 "zoned": false, 00:12:03.967 "supported_io_types": { 00:12:03.967 "read": true, 00:12:03.967 "write": true, 00:12:03.967 "unmap": true, 00:12:03.968 "flush": true, 00:12:03.968 "reset": true, 00:12:03.968 
"nvme_admin": false, 00:12:03.968 "nvme_io": false, 00:12:03.968 "nvme_io_md": false, 00:12:03.968 "write_zeroes": true, 00:12:03.968 "zcopy": true, 00:12:03.968 "get_zone_info": false, 00:12:03.968 "zone_management": false, 00:12:03.968 "zone_append": false, 00:12:03.968 "compare": false, 00:12:03.968 "compare_and_write": false, 00:12:03.968 "abort": true, 00:12:03.968 "seek_hole": false, 00:12:03.968 "seek_data": false, 00:12:03.968 "copy": true, 00:12:03.968 "nvme_iov_md": false 00:12:03.968 }, 00:12:03.968 "memory_domains": [ 00:12:03.968 { 00:12:03.968 "dma_device_id": "system", 00:12:03.968 "dma_device_type": 1 00:12:03.968 }, 00:12:03.968 { 00:12:03.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.968 "dma_device_type": 2 00:12:03.968 } 00:12:03.968 ], 00:12:03.968 "driver_specific": {} 00:12:03.968 } 00:12:03.968 ] 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.968 13:28:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.968 "name": "Existed_Raid", 00:12:03.968 "uuid": "7f0cc402-6215-407e-aca7-e0a20346a574", 00:12:03.968 "strip_size_kb": 0, 00:12:03.968 "state": "online", 00:12:03.968 "raid_level": "raid1", 00:12:03.968 "superblock": false, 00:12:03.968 "num_base_bdevs": 4, 00:12:03.968 "num_base_bdevs_discovered": 4, 00:12:03.968 "num_base_bdevs_operational": 4, 00:12:03.968 "base_bdevs_list": [ 00:12:03.968 { 00:12:03.968 "name": "BaseBdev1", 00:12:03.968 "uuid": "42cd27e2-ac81-4ed3-8773-9efd4f3d1f07", 00:12:03.968 "is_configured": true, 00:12:03.968 "data_offset": 0, 00:12:03.968 "data_size": 65536 00:12:03.968 }, 00:12:03.968 { 00:12:03.968 "name": "BaseBdev2", 00:12:03.968 "uuid": "db02008f-dc48-4f01-9515-784708b63052", 00:12:03.968 "is_configured": true, 00:12:03.968 "data_offset": 0, 00:12:03.968 "data_size": 65536 00:12:03.968 }, 00:12:03.968 { 00:12:03.968 "name": "BaseBdev3", 00:12:03.968 "uuid": 
"c65553c1-d2cb-4761-a6ed-035aeacb6e28", 00:12:03.968 "is_configured": true, 00:12:03.968 "data_offset": 0, 00:12:03.968 "data_size": 65536 00:12:03.968 }, 00:12:03.968 { 00:12:03.968 "name": "BaseBdev4", 00:12:03.968 "uuid": "13420078-2df7-4ba4-a001-49259874055e", 00:12:03.968 "is_configured": true, 00:12:03.968 "data_offset": 0, 00:12:03.968 "data_size": 65536 00:12:03.968 } 00:12:03.968 ] 00:12:03.968 }' 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.968 13:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.227 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.227 [2024-11-18 13:28:34.258706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.488 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.488 13:28:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:04.488 "name": "Existed_Raid", 00:12:04.488 "aliases": [ 00:12:04.488 "7f0cc402-6215-407e-aca7-e0a20346a574" 00:12:04.488 ], 00:12:04.488 "product_name": "Raid Volume", 00:12:04.488 "block_size": 512, 00:12:04.488 "num_blocks": 65536, 00:12:04.488 "uuid": "7f0cc402-6215-407e-aca7-e0a20346a574", 00:12:04.488 "assigned_rate_limits": { 00:12:04.488 "rw_ios_per_sec": 0, 00:12:04.488 "rw_mbytes_per_sec": 0, 00:12:04.488 "r_mbytes_per_sec": 0, 00:12:04.488 "w_mbytes_per_sec": 0 00:12:04.488 }, 00:12:04.488 "claimed": false, 00:12:04.488 "zoned": false, 00:12:04.488 "supported_io_types": { 00:12:04.488 "read": true, 00:12:04.488 "write": true, 00:12:04.488 "unmap": false, 00:12:04.488 "flush": false, 00:12:04.488 "reset": true, 00:12:04.488 "nvme_admin": false, 00:12:04.488 "nvme_io": false, 00:12:04.488 "nvme_io_md": false, 00:12:04.488 "write_zeroes": true, 00:12:04.488 "zcopy": false, 00:12:04.488 "get_zone_info": false, 00:12:04.488 "zone_management": false, 00:12:04.488 "zone_append": false, 00:12:04.488 "compare": false, 00:12:04.488 "compare_and_write": false, 00:12:04.488 "abort": false, 00:12:04.488 "seek_hole": false, 00:12:04.488 "seek_data": false, 00:12:04.488 "copy": false, 00:12:04.488 "nvme_iov_md": false 00:12:04.488 }, 00:12:04.488 "memory_domains": [ 00:12:04.488 { 00:12:04.488 "dma_device_id": "system", 00:12:04.488 "dma_device_type": 1 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.488 "dma_device_type": 2 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "dma_device_id": "system", 00:12:04.488 "dma_device_type": 1 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.488 "dma_device_type": 2 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "dma_device_id": "system", 00:12:04.488 "dma_device_type": 1 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:04.488 "dma_device_type": 2 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "dma_device_id": "system", 00:12:04.488 "dma_device_type": 1 00:12:04.488 }, 00:12:04.488 { 00:12:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.488 "dma_device_type": 2 00:12:04.488 } 00:12:04.488 ], 00:12:04.488 "driver_specific": { 00:12:04.488 "raid": { 00:12:04.488 "uuid": "7f0cc402-6215-407e-aca7-e0a20346a574", 00:12:04.488 "strip_size_kb": 0, 00:12:04.488 "state": "online", 00:12:04.488 "raid_level": "raid1", 00:12:04.488 "superblock": false, 00:12:04.488 "num_base_bdevs": 4, 00:12:04.489 "num_base_bdevs_discovered": 4, 00:12:04.489 "num_base_bdevs_operational": 4, 00:12:04.489 "base_bdevs_list": [ 00:12:04.489 { 00:12:04.489 "name": "BaseBdev1", 00:12:04.489 "uuid": "42cd27e2-ac81-4ed3-8773-9efd4f3d1f07", 00:12:04.489 "is_configured": true, 00:12:04.489 "data_offset": 0, 00:12:04.489 "data_size": 65536 00:12:04.489 }, 00:12:04.489 { 00:12:04.489 "name": "BaseBdev2", 00:12:04.489 "uuid": "db02008f-dc48-4f01-9515-784708b63052", 00:12:04.489 "is_configured": true, 00:12:04.489 "data_offset": 0, 00:12:04.489 "data_size": 65536 00:12:04.489 }, 00:12:04.489 { 00:12:04.489 "name": "BaseBdev3", 00:12:04.489 "uuid": "c65553c1-d2cb-4761-a6ed-035aeacb6e28", 00:12:04.489 "is_configured": true, 00:12:04.489 "data_offset": 0, 00:12:04.489 "data_size": 65536 00:12:04.489 }, 00:12:04.489 { 00:12:04.489 "name": "BaseBdev4", 00:12:04.489 "uuid": "13420078-2df7-4ba4-a001-49259874055e", 00:12:04.489 "is_configured": true, 00:12:04.489 "data_offset": 0, 00:12:04.489 "data_size": 65536 00:12:04.489 } 00:12:04.489 ] 00:12:04.489 } 00:12:04.489 } 00:12:04.489 }' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:04.489 BaseBdev2 00:12:04.489 BaseBdev3 
00:12:04.489 BaseBdev4' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.489 13:28:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.489 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.749 13:28:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.749 [2024-11-18 13:28:34.589863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.749 
13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.749 "name": "Existed_Raid", 00:12:04.749 "uuid": "7f0cc402-6215-407e-aca7-e0a20346a574", 00:12:04.749 "strip_size_kb": 0, 00:12:04.749 "state": "online", 00:12:04.749 "raid_level": "raid1", 00:12:04.749 "superblock": false, 00:12:04.749 "num_base_bdevs": 4, 00:12:04.749 "num_base_bdevs_discovered": 3, 00:12:04.749 "num_base_bdevs_operational": 3, 00:12:04.749 "base_bdevs_list": [ 00:12:04.749 { 00:12:04.749 "name": null, 00:12:04.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.749 "is_configured": false, 00:12:04.749 "data_offset": 0, 00:12:04.749 "data_size": 65536 00:12:04.749 }, 00:12:04.749 { 00:12:04.749 "name": "BaseBdev2", 00:12:04.749 "uuid": "db02008f-dc48-4f01-9515-784708b63052", 00:12:04.749 "is_configured": true, 00:12:04.749 "data_offset": 0, 00:12:04.749 "data_size": 65536 00:12:04.749 }, 00:12:04.749 { 00:12:04.749 "name": "BaseBdev3", 00:12:04.749 "uuid": "c65553c1-d2cb-4761-a6ed-035aeacb6e28", 00:12:04.749 "is_configured": true, 00:12:04.749 "data_offset": 0, 
00:12:04.749 "data_size": 65536 00:12:04.749 }, 00:12:04.749 { 00:12:04.749 "name": "BaseBdev4", 00:12:04.749 "uuid": "13420078-2df7-4ba4-a001-49259874055e", 00:12:04.749 "is_configured": true, 00:12:04.749 "data_offset": 0, 00:12:04.749 "data_size": 65536 00:12:04.749 } 00:12:04.749 ] 00:12:04.749 }' 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.749 13:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 [2024-11-18 13:28:35.179947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.318 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.318 [2024-11-18 13:28:35.339840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.578 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.579 [2024-11-18 13:28:35.506792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:05.579 [2024-11-18 13:28:35.507020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.579 [2024-11-18 13:28:35.614508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.579 [2024-11-18 13:28:35.614710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.579 [2024-11-18 13:28:35.614770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.579 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.838 BaseBdev2 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.838 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.838 [ 00:12:05.838 { 00:12:05.838 "name": "BaseBdev2", 00:12:05.838 "aliases": [ 00:12:05.838 "7fe47a01-6243-49d7-9254-63c24e824301" 00:12:05.838 ], 00:12:05.838 "product_name": "Malloc disk", 00:12:05.838 "block_size": 512, 00:12:05.838 "num_blocks": 65536, 00:12:05.838 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:05.838 "assigned_rate_limits": { 00:12:05.838 "rw_ios_per_sec": 0, 00:12:05.838 "rw_mbytes_per_sec": 0, 00:12:05.838 "r_mbytes_per_sec": 0, 00:12:05.838 "w_mbytes_per_sec": 0 00:12:05.838 }, 00:12:05.838 "claimed": false, 00:12:05.838 "zoned": false, 00:12:05.838 "supported_io_types": { 00:12:05.838 "read": true, 00:12:05.838 "write": true, 00:12:05.838 "unmap": true, 00:12:05.838 "flush": true, 00:12:05.838 "reset": true, 00:12:05.838 "nvme_admin": false, 00:12:05.838 "nvme_io": false, 00:12:05.838 "nvme_io_md": false, 00:12:05.838 "write_zeroes": true, 00:12:05.838 "zcopy": true, 00:12:05.838 "get_zone_info": false, 00:12:05.838 "zone_management": false, 00:12:05.838 "zone_append": false, 
00:12:05.838 "compare": false, 00:12:05.838 "compare_and_write": false, 00:12:05.838 "abort": true, 00:12:05.838 "seek_hole": false, 00:12:05.838 "seek_data": false, 00:12:05.838 "copy": true, 00:12:05.838 "nvme_iov_md": false 00:12:05.838 }, 00:12:05.838 "memory_domains": [ 00:12:05.838 { 00:12:05.838 "dma_device_id": "system", 00:12:05.838 "dma_device_type": 1 00:12:05.838 }, 00:12:05.838 { 00:12:05.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.839 "dma_device_type": 2 00:12:05.839 } 00:12:05.839 ], 00:12:05.839 "driver_specific": {} 00:12:05.839 } 00:12:05.839 ] 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.839 BaseBdev3 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.839 [ 00:12:05.839 { 00:12:05.839 "name": "BaseBdev3", 00:12:05.839 "aliases": [ 00:12:05.839 "328edcdd-1c71-4078-8bd5-a41ff754eebb" 00:12:05.839 ], 00:12:05.839 "product_name": "Malloc disk", 00:12:05.839 "block_size": 512, 00:12:05.839 "num_blocks": 65536, 00:12:05.839 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:05.839 "assigned_rate_limits": { 00:12:05.839 "rw_ios_per_sec": 0, 00:12:05.839 "rw_mbytes_per_sec": 0, 00:12:05.839 "r_mbytes_per_sec": 0, 00:12:05.839 "w_mbytes_per_sec": 0 00:12:05.839 }, 00:12:05.839 "claimed": false, 00:12:05.839 "zoned": false, 00:12:05.839 "supported_io_types": { 00:12:05.839 "read": true, 00:12:05.839 "write": true, 00:12:05.839 "unmap": true, 00:12:05.839 "flush": true, 00:12:05.839 "reset": true, 00:12:05.839 "nvme_admin": false, 00:12:05.839 "nvme_io": false, 00:12:05.839 "nvme_io_md": false, 00:12:05.839 "write_zeroes": true, 00:12:05.839 "zcopy": true, 00:12:05.839 "get_zone_info": false, 00:12:05.839 "zone_management": false, 00:12:05.839 "zone_append": false, 
00:12:05.839 "compare": false, 00:12:05.839 "compare_and_write": false, 00:12:05.839 "abort": true, 00:12:05.839 "seek_hole": false, 00:12:05.839 "seek_data": false, 00:12:05.839 "copy": true, 00:12:05.839 "nvme_iov_md": false 00:12:05.839 }, 00:12:05.839 "memory_domains": [ 00:12:05.839 { 00:12:05.839 "dma_device_id": "system", 00:12:05.839 "dma_device_type": 1 00:12:05.839 }, 00:12:05.839 { 00:12:05.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.839 "dma_device_type": 2 00:12:05.839 } 00:12:05.839 ], 00:12:05.839 "driver_specific": {} 00:12:05.839 } 00:12:05.839 ] 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.839 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.098 BaseBdev4 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.098 [ 00:12:06.098 { 00:12:06.098 "name": "BaseBdev4", 00:12:06.098 "aliases": [ 00:12:06.098 "f90c561a-e457-4659-a30e-789d54141641" 00:12:06.098 ], 00:12:06.098 "product_name": "Malloc disk", 00:12:06.098 "block_size": 512, 00:12:06.098 "num_blocks": 65536, 00:12:06.098 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:06.098 "assigned_rate_limits": { 00:12:06.098 "rw_ios_per_sec": 0, 00:12:06.098 "rw_mbytes_per_sec": 0, 00:12:06.098 "r_mbytes_per_sec": 0, 00:12:06.098 "w_mbytes_per_sec": 0 00:12:06.098 }, 00:12:06.098 "claimed": false, 00:12:06.098 "zoned": false, 00:12:06.098 "supported_io_types": { 00:12:06.098 "read": true, 00:12:06.098 "write": true, 00:12:06.098 "unmap": true, 00:12:06.098 "flush": true, 00:12:06.098 "reset": true, 00:12:06.098 "nvme_admin": false, 00:12:06.098 "nvme_io": false, 00:12:06.098 "nvme_io_md": false, 00:12:06.098 "write_zeroes": true, 00:12:06.098 "zcopy": true, 00:12:06.098 "get_zone_info": false, 00:12:06.098 "zone_management": false, 00:12:06.098 "zone_append": false, 
00:12:06.098 "compare": false, 00:12:06.098 "compare_and_write": false, 00:12:06.098 "abort": true, 00:12:06.098 "seek_hole": false, 00:12:06.098 "seek_data": false, 00:12:06.098 "copy": true, 00:12:06.098 "nvme_iov_md": false 00:12:06.098 }, 00:12:06.098 "memory_domains": [ 00:12:06.098 { 00:12:06.098 "dma_device_id": "system", 00:12:06.098 "dma_device_type": 1 00:12:06.098 }, 00:12:06.098 { 00:12:06.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.098 "dma_device_type": 2 00:12:06.098 } 00:12:06.098 ], 00:12:06.098 "driver_specific": {} 00:12:06.098 } 00:12:06.098 ] 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.098 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.098 [2024-11-18 13:28:35.946718] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.098 [2024-11-18 13:28:35.946818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.098 [2024-11-18 13:28:35.946881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.098 [2024-11-18 13:28:35.949009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.099 [2024-11-18 13:28:35.949100] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:06.099 "name": "Existed_Raid", 00:12:06.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.099 "strip_size_kb": 0, 00:12:06.099 "state": "configuring", 00:12:06.099 "raid_level": "raid1", 00:12:06.099 "superblock": false, 00:12:06.099 "num_base_bdevs": 4, 00:12:06.099 "num_base_bdevs_discovered": 3, 00:12:06.099 "num_base_bdevs_operational": 4, 00:12:06.099 "base_bdevs_list": [ 00:12:06.099 { 00:12:06.099 "name": "BaseBdev1", 00:12:06.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.099 "is_configured": false, 00:12:06.099 "data_offset": 0, 00:12:06.099 "data_size": 0 00:12:06.099 }, 00:12:06.099 { 00:12:06.099 "name": "BaseBdev2", 00:12:06.099 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:06.099 "is_configured": true, 00:12:06.099 "data_offset": 0, 00:12:06.099 "data_size": 65536 00:12:06.099 }, 00:12:06.099 { 00:12:06.099 "name": "BaseBdev3", 00:12:06.099 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:06.099 "is_configured": true, 00:12:06.099 "data_offset": 0, 00:12:06.099 "data_size": 65536 00:12:06.099 }, 00:12:06.099 { 00:12:06.099 "name": "BaseBdev4", 00:12:06.099 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:06.099 "is_configured": true, 00:12:06.099 "data_offset": 0, 00:12:06.099 "data_size": 65536 00:12:06.099 } 00:12:06.099 ] 00:12:06.099 }' 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.099 13:28:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.666 [2024-11-18 13:28:36.461895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
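As an aside on the trace above: the repeated `(( i = 1 )) / (( i < num_base_bdevs )) / (( i++ ))` records come from the base-bdev creation loop at `bdev/bdev_raid.sh@286-288`. A minimal sketch of that loop follows; it is an assumption reconstructed from the trace, and the real `rpc_cmd bdev_malloc_create` call needs a running SPDK target, so it is left as a comment.

```shell
# Sketch (assumed from the trace at bdev/bdev_raid.sh@286-288): iterate
# i = 1 .. num_base_bdevs-1 and create one malloc bdev per remaining slot,
# named BaseBdev(i+1) — BaseBdev1 is deliberately created later in the test.
num_base_bdevs=4
created=()
for ((i = 1; i < num_base_bdevs; i++)); do
  name="BaseBdev$((i + 1))"
  # in the log: rpc_cmd bdev_malloc_create 32 512 -b "$name"
  created+=("$name")
done
echo "${created[*]}"   # BaseBdev2 BaseBdev3 BaseBdev4
```

This matches the order seen in the trace: BaseBdev2, BaseBdev3, then BaseBdev4 are created and waited on before the raid is assembled with the still-missing BaseBdev1.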
00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.666 "name": "Existed_Raid", 00:12:06.666 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.666 "strip_size_kb": 0, 00:12:06.666 "state": "configuring", 00:12:06.666 "raid_level": "raid1", 00:12:06.666 "superblock": false, 00:12:06.666 "num_base_bdevs": 4, 00:12:06.666 "num_base_bdevs_discovered": 2, 00:12:06.666 "num_base_bdevs_operational": 4, 00:12:06.666 "base_bdevs_list": [ 00:12:06.666 { 00:12:06.666 "name": "BaseBdev1", 00:12:06.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.666 "is_configured": false, 00:12:06.666 "data_offset": 0, 00:12:06.666 "data_size": 0 00:12:06.666 }, 00:12:06.666 { 00:12:06.666 "name": null, 00:12:06.666 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:06.666 "is_configured": false, 00:12:06.666 "data_offset": 0, 00:12:06.666 "data_size": 65536 00:12:06.666 }, 00:12:06.666 { 00:12:06.666 "name": "BaseBdev3", 00:12:06.666 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:06.666 "is_configured": true, 00:12:06.666 "data_offset": 0, 00:12:06.666 "data_size": 65536 00:12:06.666 }, 00:12:06.666 { 00:12:06.666 "name": "BaseBdev4", 00:12:06.666 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:06.666 "is_configured": true, 00:12:06.666 "data_offset": 0, 00:12:06.666 "data_size": 65536 00:12:06.666 } 00:12:06.666 ] 00:12:06.666 }' 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.666 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.927 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.927 [2024-11-18 13:28:36.978032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.187 BaseBdev1 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.187 13:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.187 [ 00:12:07.187 { 00:12:07.187 "name": "BaseBdev1", 00:12:07.187 "aliases": [ 00:12:07.187 "5a5bc905-3656-4d4b-b1b8-c869081d6a86" 00:12:07.187 ], 00:12:07.187 "product_name": "Malloc disk", 00:12:07.187 "block_size": 512, 00:12:07.187 "num_blocks": 65536, 00:12:07.187 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:07.187 "assigned_rate_limits": { 00:12:07.187 "rw_ios_per_sec": 0, 00:12:07.187 "rw_mbytes_per_sec": 0, 00:12:07.187 "r_mbytes_per_sec": 0, 00:12:07.187 "w_mbytes_per_sec": 0 00:12:07.187 }, 00:12:07.187 "claimed": true, 00:12:07.187 "claim_type": "exclusive_write", 00:12:07.187 "zoned": false, 00:12:07.187 "supported_io_types": { 00:12:07.187 "read": true, 00:12:07.187 "write": true, 00:12:07.187 "unmap": true, 00:12:07.187 "flush": true, 00:12:07.187 "reset": true, 00:12:07.187 "nvme_admin": false, 00:12:07.187 "nvme_io": false, 00:12:07.187 "nvme_io_md": false, 00:12:07.187 "write_zeroes": true, 00:12:07.187 "zcopy": true, 00:12:07.187 "get_zone_info": false, 00:12:07.187 "zone_management": false, 00:12:07.187 "zone_append": false, 00:12:07.187 "compare": false, 00:12:07.187 "compare_and_write": false, 00:12:07.187 "abort": true, 00:12:07.187 "seek_hole": false, 00:12:07.187 "seek_data": false, 00:12:07.187 "copy": true, 00:12:07.187 "nvme_iov_md": false 00:12:07.187 }, 00:12:07.187 "memory_domains": [ 00:12:07.187 { 00:12:07.187 "dma_device_id": "system", 00:12:07.187 "dma_device_type": 1 00:12:07.187 }, 00:12:07.187 { 00:12:07.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.188 "dma_device_type": 2 00:12:07.188 } 00:12:07.188 ], 00:12:07.188 "driver_specific": {} 00:12:07.188 } 00:12:07.188 ] 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
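The `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` calls that recur in this trace pull the raid info with `bdev_raid_get_bdevs all`, filter it through `jq -r '.[] | select(.name == "Existed_Raid")'`, and compare the expected state, level, strip size, and bdev counts. A minimal sketch of those checks, run against a JSON fragment copied from the log (a plain `sed` extractor stands in for the script's `jq` filter, an assumption made so the sketch has no external dependency):

```shell
# Field values below are copied from the Existed_Raid info in the log;
# get_field is a hypothetical helper, not part of bdev_raid.sh.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4
}'

get_field() {
  # extract the value of a "key": value pair, stripping quotes and commas
  sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}.*/\1/p" <<<"$raid_bdev_info"
}

state=$(get_field state)
level=$(get_field raid_level)
discovered=$(get_field num_base_bdevs_discovered)

[[ $state == configuring ]] && [[ $level == raid1 ]] && (( discovered < 4 )) &&
  echo "Existed_Raid still configuring ($discovered of 4 discovered)"
```

The point the test exercises is visible in the counts: the raid stays in `configuring` as long as `num_base_bdevs_discovered` is below `num_base_bdevs_operational`, which is why removing or adding a base bdev in the trace only moves the discovered count, never the state.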
00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.188 "name": "Existed_Raid", 00:12:07.188 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:07.188 "strip_size_kb": 0, 00:12:07.188 "state": "configuring", 00:12:07.188 "raid_level": "raid1", 00:12:07.188 "superblock": false, 00:12:07.188 "num_base_bdevs": 4, 00:12:07.188 "num_base_bdevs_discovered": 3, 00:12:07.188 "num_base_bdevs_operational": 4, 00:12:07.188 "base_bdevs_list": [ 00:12:07.188 { 00:12:07.188 "name": "BaseBdev1", 00:12:07.188 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:07.188 "is_configured": true, 00:12:07.188 "data_offset": 0, 00:12:07.188 "data_size": 65536 00:12:07.188 }, 00:12:07.188 { 00:12:07.188 "name": null, 00:12:07.188 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:07.188 "is_configured": false, 00:12:07.188 "data_offset": 0, 00:12:07.188 "data_size": 65536 00:12:07.188 }, 00:12:07.188 { 00:12:07.188 "name": "BaseBdev3", 00:12:07.188 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:07.188 "is_configured": true, 00:12:07.188 "data_offset": 0, 00:12:07.188 "data_size": 65536 00:12:07.188 }, 00:12:07.188 { 00:12:07.188 "name": "BaseBdev4", 00:12:07.188 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:07.188 "is_configured": true, 00:12:07.188 "data_offset": 0, 00:12:07.188 "data_size": 65536 00:12:07.188 } 00:12:07.188 ] 00:12:07.188 }' 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.188 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.447 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.447 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.447 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.447 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.706 [2024-11-18 13:28:37.533181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.706 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.706 "name": "Existed_Raid", 00:12:07.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.706 "strip_size_kb": 0, 00:12:07.706 "state": "configuring", 00:12:07.706 "raid_level": "raid1", 00:12:07.706 "superblock": false, 00:12:07.706 "num_base_bdevs": 4, 00:12:07.706 "num_base_bdevs_discovered": 2, 00:12:07.706 "num_base_bdevs_operational": 4, 00:12:07.706 "base_bdevs_list": [ 00:12:07.706 { 00:12:07.706 "name": "BaseBdev1", 00:12:07.707 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:07.707 "is_configured": true, 00:12:07.707 "data_offset": 0, 00:12:07.707 "data_size": 65536 00:12:07.707 }, 00:12:07.707 { 00:12:07.707 "name": null, 00:12:07.707 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:07.707 "is_configured": false, 00:12:07.707 "data_offset": 0, 00:12:07.707 "data_size": 65536 00:12:07.707 }, 00:12:07.707 { 00:12:07.707 "name": null, 00:12:07.707 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:07.707 "is_configured": false, 00:12:07.707 "data_offset": 0, 00:12:07.707 "data_size": 65536 00:12:07.707 }, 00:12:07.707 { 00:12:07.707 "name": "BaseBdev4", 00:12:07.707 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:07.707 "is_configured": true, 00:12:07.707 "data_offset": 0, 00:12:07.707 "data_size": 65536 00:12:07.707 } 00:12:07.707 ] 00:12:07.707 }' 00:12:07.707 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.707 13:28:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.967 [2024-11-18 13:28:37.996389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.967 13:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.967 13:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.967 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.227 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.227 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.227 "name": "Existed_Raid", 00:12:08.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.227 "strip_size_kb": 0, 00:12:08.227 "state": "configuring", 00:12:08.227 "raid_level": "raid1", 00:12:08.227 "superblock": false, 00:12:08.227 "num_base_bdevs": 4, 00:12:08.227 "num_base_bdevs_discovered": 3, 00:12:08.227 "num_base_bdevs_operational": 4, 00:12:08.227 "base_bdevs_list": [ 00:12:08.227 { 00:12:08.227 "name": "BaseBdev1", 00:12:08.227 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:08.227 "is_configured": true, 00:12:08.227 "data_offset": 0, 00:12:08.227 "data_size": 65536 00:12:08.227 }, 00:12:08.227 { 00:12:08.227 "name": null, 00:12:08.227 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:08.227 "is_configured": false, 00:12:08.227 "data_offset": 
0, 00:12:08.227 "data_size": 65536 00:12:08.227 }, 00:12:08.227 { 00:12:08.227 "name": "BaseBdev3", 00:12:08.227 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:08.227 "is_configured": true, 00:12:08.227 "data_offset": 0, 00:12:08.227 "data_size": 65536 00:12:08.227 }, 00:12:08.227 { 00:12:08.227 "name": "BaseBdev4", 00:12:08.227 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:08.227 "is_configured": true, 00:12:08.227 "data_offset": 0, 00:12:08.227 "data_size": 65536 00:12:08.227 } 00:12:08.227 ] 00:12:08.227 }' 00:12:08.227 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.227 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.486 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.486 [2024-11-18 13:28:38.471639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.746 13:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.746 "name": "Existed_Raid", 00:12:08.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.746 "strip_size_kb": 0, 00:12:08.746 "state": "configuring", 00:12:08.746 
"raid_level": "raid1", 00:12:08.746 "superblock": false, 00:12:08.746 "num_base_bdevs": 4, 00:12:08.746 "num_base_bdevs_discovered": 2, 00:12:08.746 "num_base_bdevs_operational": 4, 00:12:08.746 "base_bdevs_list": [ 00:12:08.746 { 00:12:08.746 "name": null, 00:12:08.746 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:08.746 "is_configured": false, 00:12:08.746 "data_offset": 0, 00:12:08.746 "data_size": 65536 00:12:08.746 }, 00:12:08.746 { 00:12:08.746 "name": null, 00:12:08.746 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:08.746 "is_configured": false, 00:12:08.746 "data_offset": 0, 00:12:08.746 "data_size": 65536 00:12:08.746 }, 00:12:08.746 { 00:12:08.746 "name": "BaseBdev3", 00:12:08.746 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:08.746 "is_configured": true, 00:12:08.746 "data_offset": 0, 00:12:08.746 "data_size": 65536 00:12:08.746 }, 00:12:08.746 { 00:12:08.746 "name": "BaseBdev4", 00:12:08.746 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:08.746 "is_configured": true, 00:12:08.746 "data_offset": 0, 00:12:08.746 "data_size": 65536 00:12:08.746 } 00:12:08.746 ] 00:12:08.746 }' 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.746 13:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.005 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.005 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.005 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.005 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.005 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.273 [2024-11-18 13:28:39.071941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.273 "name": "Existed_Raid", 00:12:09.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.273 "strip_size_kb": 0, 00:12:09.273 "state": "configuring", 00:12:09.273 "raid_level": "raid1", 00:12:09.273 "superblock": false, 00:12:09.273 "num_base_bdevs": 4, 00:12:09.273 "num_base_bdevs_discovered": 3, 00:12:09.273 "num_base_bdevs_operational": 4, 00:12:09.273 "base_bdevs_list": [ 00:12:09.273 { 00:12:09.273 "name": null, 00:12:09.273 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:09.273 "is_configured": false, 00:12:09.273 "data_offset": 0, 00:12:09.273 "data_size": 65536 00:12:09.273 }, 00:12:09.273 { 00:12:09.273 "name": "BaseBdev2", 00:12:09.273 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:09.273 "is_configured": true, 00:12:09.273 "data_offset": 0, 00:12:09.273 "data_size": 65536 00:12:09.273 }, 00:12:09.273 { 00:12:09.273 "name": "BaseBdev3", 00:12:09.273 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:09.273 "is_configured": true, 00:12:09.273 "data_offset": 0, 00:12:09.273 "data_size": 65536 00:12:09.273 }, 00:12:09.273 { 00:12:09.273 "name": "BaseBdev4", 00:12:09.273 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:09.273 "is_configured": true, 00:12:09.273 "data_offset": 0, 00:12:09.273 "data_size": 65536 00:12:09.273 } 00:12:09.273 ] 00:12:09.273 }' 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.273 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.532 13:28:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:09.532 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5a5bc905-3656-4d4b-b1b8-c869081d6a86 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.791 [2024-11-18 13:28:39.629288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:09.791 [2024-11-18 13:28:39.629336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:09.791 [2024-11-18 13:28:39.629346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:09.791 
[2024-11-18 13:28:39.629650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:09.791 [2024-11-18 13:28:39.629834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:09.791 [2024-11-18 13:28:39.629845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:09.791 [2024-11-18 13:28:39.630160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.791 NewBaseBdev 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:09.791 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.792 [ 00:12:09.792 { 00:12:09.792 "name": "NewBaseBdev", 00:12:09.792 "aliases": [ 00:12:09.792 "5a5bc905-3656-4d4b-b1b8-c869081d6a86" 00:12:09.792 ], 00:12:09.792 "product_name": "Malloc disk", 00:12:09.792 "block_size": 512, 00:12:09.792 "num_blocks": 65536, 00:12:09.792 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:09.792 "assigned_rate_limits": { 00:12:09.792 "rw_ios_per_sec": 0, 00:12:09.792 "rw_mbytes_per_sec": 0, 00:12:09.792 "r_mbytes_per_sec": 0, 00:12:09.792 "w_mbytes_per_sec": 0 00:12:09.792 }, 00:12:09.792 "claimed": true, 00:12:09.792 "claim_type": "exclusive_write", 00:12:09.792 "zoned": false, 00:12:09.792 "supported_io_types": { 00:12:09.792 "read": true, 00:12:09.792 "write": true, 00:12:09.792 "unmap": true, 00:12:09.792 "flush": true, 00:12:09.792 "reset": true, 00:12:09.792 "nvme_admin": false, 00:12:09.792 "nvme_io": false, 00:12:09.792 "nvme_io_md": false, 00:12:09.792 "write_zeroes": true, 00:12:09.792 "zcopy": true, 00:12:09.792 "get_zone_info": false, 00:12:09.792 "zone_management": false, 00:12:09.792 "zone_append": false, 00:12:09.792 "compare": false, 00:12:09.792 "compare_and_write": false, 00:12:09.792 "abort": true, 00:12:09.792 "seek_hole": false, 00:12:09.792 "seek_data": false, 00:12:09.792 "copy": true, 00:12:09.792 "nvme_iov_md": false 00:12:09.792 }, 00:12:09.792 "memory_domains": [ 00:12:09.792 { 00:12:09.792 "dma_device_id": "system", 00:12:09.792 "dma_device_type": 1 00:12:09.792 }, 00:12:09.792 { 00:12:09.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.792 "dma_device_type": 2 00:12:09.792 } 00:12:09.792 ], 00:12:09.792 "driver_specific": {} 00:12:09.792 } 00:12:09.792 ] 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.792 "name": "Existed_Raid", 00:12:09.792 "uuid": "137cc4cf-0cd2-41a6-ac71-ec70839495e8", 00:12:09.792 "strip_size_kb": 0, 00:12:09.792 "state": "online", 00:12:09.792 
"raid_level": "raid1", 00:12:09.792 "superblock": false, 00:12:09.792 "num_base_bdevs": 4, 00:12:09.792 "num_base_bdevs_discovered": 4, 00:12:09.792 "num_base_bdevs_operational": 4, 00:12:09.792 "base_bdevs_list": [ 00:12:09.792 { 00:12:09.792 "name": "NewBaseBdev", 00:12:09.792 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:09.792 "is_configured": true, 00:12:09.792 "data_offset": 0, 00:12:09.792 "data_size": 65536 00:12:09.792 }, 00:12:09.792 { 00:12:09.792 "name": "BaseBdev2", 00:12:09.792 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:09.792 "is_configured": true, 00:12:09.792 "data_offset": 0, 00:12:09.792 "data_size": 65536 00:12:09.792 }, 00:12:09.792 { 00:12:09.792 "name": "BaseBdev3", 00:12:09.792 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:09.792 "is_configured": true, 00:12:09.792 "data_offset": 0, 00:12:09.792 "data_size": 65536 00:12:09.792 }, 00:12:09.792 { 00:12:09.792 "name": "BaseBdev4", 00:12:09.792 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:09.792 "is_configured": true, 00:12:09.792 "data_offset": 0, 00:12:09.792 "data_size": 65536 00:12:09.792 } 00:12:09.792 ] 00:12:09.792 }' 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.792 13:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.051 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.051 [2024-11-18 13:28:40.100942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.311 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.311 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.311 "name": "Existed_Raid", 00:12:10.311 "aliases": [ 00:12:10.311 "137cc4cf-0cd2-41a6-ac71-ec70839495e8" 00:12:10.311 ], 00:12:10.311 "product_name": "Raid Volume", 00:12:10.311 "block_size": 512, 00:12:10.311 "num_blocks": 65536, 00:12:10.311 "uuid": "137cc4cf-0cd2-41a6-ac71-ec70839495e8", 00:12:10.311 "assigned_rate_limits": { 00:12:10.311 "rw_ios_per_sec": 0, 00:12:10.311 "rw_mbytes_per_sec": 0, 00:12:10.311 "r_mbytes_per_sec": 0, 00:12:10.311 "w_mbytes_per_sec": 0 00:12:10.311 }, 00:12:10.311 "claimed": false, 00:12:10.311 "zoned": false, 00:12:10.311 "supported_io_types": { 00:12:10.311 "read": true, 00:12:10.311 "write": true, 00:12:10.311 "unmap": false, 00:12:10.311 "flush": false, 00:12:10.311 "reset": true, 00:12:10.311 "nvme_admin": false, 00:12:10.311 "nvme_io": false, 00:12:10.311 "nvme_io_md": false, 00:12:10.311 "write_zeroes": true, 00:12:10.311 "zcopy": false, 00:12:10.311 "get_zone_info": false, 00:12:10.311 "zone_management": false, 00:12:10.311 "zone_append": false, 00:12:10.311 "compare": false, 00:12:10.311 "compare_and_write": false, 00:12:10.311 "abort": false, 00:12:10.311 "seek_hole": false, 00:12:10.311 "seek_data": false, 00:12:10.311 
"copy": false, 00:12:10.311 "nvme_iov_md": false 00:12:10.311 }, 00:12:10.311 "memory_domains": [ 00:12:10.311 { 00:12:10.311 "dma_device_id": "system", 00:12:10.311 "dma_device_type": 1 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.311 "dma_device_type": 2 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "dma_device_id": "system", 00:12:10.311 "dma_device_type": 1 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.311 "dma_device_type": 2 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "dma_device_id": "system", 00:12:10.311 "dma_device_type": 1 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.311 "dma_device_type": 2 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "dma_device_id": "system", 00:12:10.311 "dma_device_type": 1 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.311 "dma_device_type": 2 00:12:10.311 } 00:12:10.311 ], 00:12:10.311 "driver_specific": { 00:12:10.311 "raid": { 00:12:10.311 "uuid": "137cc4cf-0cd2-41a6-ac71-ec70839495e8", 00:12:10.311 "strip_size_kb": 0, 00:12:10.311 "state": "online", 00:12:10.311 "raid_level": "raid1", 00:12:10.311 "superblock": false, 00:12:10.311 "num_base_bdevs": 4, 00:12:10.311 "num_base_bdevs_discovered": 4, 00:12:10.311 "num_base_bdevs_operational": 4, 00:12:10.311 "base_bdevs_list": [ 00:12:10.311 { 00:12:10.311 "name": "NewBaseBdev", 00:12:10.311 "uuid": "5a5bc905-3656-4d4b-b1b8-c869081d6a86", 00:12:10.311 "is_configured": true, 00:12:10.311 "data_offset": 0, 00:12:10.311 "data_size": 65536 00:12:10.311 }, 00:12:10.311 { 00:12:10.311 "name": "BaseBdev2", 00:12:10.311 "uuid": "7fe47a01-6243-49d7-9254-63c24e824301", 00:12:10.311 "is_configured": true, 00:12:10.311 "data_offset": 0, 00:12:10.311 "data_size": 65536 00:12:10.312 }, 00:12:10.312 { 00:12:10.312 "name": "BaseBdev3", 00:12:10.312 "uuid": "328edcdd-1c71-4078-8bd5-a41ff754eebb", 00:12:10.312 
"is_configured": true, 00:12:10.312 "data_offset": 0, 00:12:10.312 "data_size": 65536 00:12:10.312 }, 00:12:10.312 { 00:12:10.312 "name": "BaseBdev4", 00:12:10.312 "uuid": "f90c561a-e457-4659-a30e-789d54141641", 00:12:10.312 "is_configured": true, 00:12:10.312 "data_offset": 0, 00:12:10.312 "data_size": 65536 00:12:10.312 } 00:12:10.312 ] 00:12:10.312 } 00:12:10.312 } 00:12:10.312 }' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:10.312 BaseBdev2 00:12:10.312 BaseBdev3 00:12:10.312 BaseBdev4' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.312 13:28:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.312 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.571 13:28:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.571 [2024-11-18 13:28:40.427934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.571 [2024-11-18 13:28:40.427967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.571 [2024-11-18 13:28:40.428058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.571 [2024-11-18 13:28:40.428397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.571 [2024-11-18 13:28:40.428413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73188 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73188 ']' 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73188 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73188 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73188' 00:12:10.571 killing process with pid 73188 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73188 00:12:10.571 [2024-11-18 13:28:40.463944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.571 13:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73188 00:12:11.140 [2024-11-18 13:28:40.886265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.077 13:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:12.077 00:12:12.077 real 0m11.873s 00:12:12.077 user 0m18.522s 00:12:12.077 sys 0m2.302s 00:12:12.077 13:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.077 13:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.077 ************************************ 00:12:12.077 END TEST raid_state_function_test 00:12:12.077 ************************************ 
00:12:12.338 13:28:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:12.338 13:28:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:12.338 13:28:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.338 13:28:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.338 ************************************ 00:12:12.338 START TEST raid_state_function_test_sb 00:12:12.338 ************************************ 00:12:12.338 13:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.339 
13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73855 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73855' 00:12:12.339 Process raid pid: 73855 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73855 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73855 ']' 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.339 13:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.339 [2024-11-18 13:28:42.255255] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:12.339 [2024-11-18 13:28:42.255446] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.599 [2024-11-18 13:28:42.433993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.599 [2024-11-18 13:28:42.570568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.859 [2024-11-18 13:28:42.806723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.859 [2024-11-18 13:28:42.806770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.120 [2024-11-18 13:28:43.135019] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.120 [2024-11-18 13:28:43.135080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.120 [2024-11-18 13:28:43.135092] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.120 [2024-11-18 13:28:43.135102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.120 [2024-11-18 13:28:43.135109] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:13.120 [2024-11-18 13:28:43.135118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.120 [2024-11-18 13:28:43.135147] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:13.120 [2024-11-18 13:28:43.135157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.120 13:28:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.120 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.385 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.385 "name": "Existed_Raid", 00:12:13.385 "uuid": "fab85b71-b1ad-4a8d-a65c-6912bc462a2b", 00:12:13.385 "strip_size_kb": 0, 00:12:13.385 "state": "configuring", 00:12:13.385 "raid_level": "raid1", 00:12:13.385 "superblock": true, 00:12:13.385 "num_base_bdevs": 4, 00:12:13.385 "num_base_bdevs_discovered": 0, 00:12:13.385 "num_base_bdevs_operational": 4, 00:12:13.385 "base_bdevs_list": [ 00:12:13.385 { 00:12:13.385 "name": "BaseBdev1", 00:12:13.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.385 "is_configured": false, 00:12:13.385 "data_offset": 0, 00:12:13.385 "data_size": 0 00:12:13.385 }, 00:12:13.385 { 00:12:13.385 "name": "BaseBdev2", 00:12:13.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.385 "is_configured": false, 00:12:13.385 "data_offset": 0, 00:12:13.385 "data_size": 0 00:12:13.385 }, 00:12:13.385 { 00:12:13.385 "name": "BaseBdev3", 00:12:13.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.385 "is_configured": false, 00:12:13.385 "data_offset": 0, 00:12:13.385 "data_size": 0 00:12:13.385 }, 00:12:13.385 { 00:12:13.385 "name": "BaseBdev4", 00:12:13.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.385 "is_configured": false, 00:12:13.385 "data_offset": 0, 00:12:13.385 "data_size": 0 00:12:13.385 } 00:12:13.385 ] 00:12:13.385 }' 00:12:13.385 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.385 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.656 13:28:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.656 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.656 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.656 [2024-11-18 13:28:43.578312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.656 [2024-11-18 13:28:43.578428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:13.656 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.656 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.656 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.656 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.656 [2024-11-18 13:28:43.590264] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.656 [2024-11-18 13:28:43.590362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.656 [2024-11-18 13:28:43.590393] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.656 [2024-11-18 13:28:43.590418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.656 [2024-11-18 13:28:43.590437] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.656 [2024-11-18 13:28:43.590459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.656 [2024-11-18 13:28:43.590478] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:13.657 [2024-11-18 13:28:43.590500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.657 [2024-11-18 13:28:43.642805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.657 BaseBdev1 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.657 [ 00:12:13.657 { 00:12:13.657 "name": "BaseBdev1", 00:12:13.657 "aliases": [ 00:12:13.657 "75254bc9-a744-4f52-b451-c46428d8d589" 00:12:13.657 ], 00:12:13.657 "product_name": "Malloc disk", 00:12:13.657 "block_size": 512, 00:12:13.657 "num_blocks": 65536, 00:12:13.657 "uuid": "75254bc9-a744-4f52-b451-c46428d8d589", 00:12:13.657 "assigned_rate_limits": { 00:12:13.657 "rw_ios_per_sec": 0, 00:12:13.657 "rw_mbytes_per_sec": 0, 00:12:13.657 "r_mbytes_per_sec": 0, 00:12:13.657 "w_mbytes_per_sec": 0 00:12:13.657 }, 00:12:13.657 "claimed": true, 00:12:13.657 "claim_type": "exclusive_write", 00:12:13.657 "zoned": false, 00:12:13.657 "supported_io_types": { 00:12:13.657 "read": true, 00:12:13.657 "write": true, 00:12:13.657 "unmap": true, 00:12:13.657 "flush": true, 00:12:13.657 "reset": true, 00:12:13.657 "nvme_admin": false, 00:12:13.657 "nvme_io": false, 00:12:13.657 "nvme_io_md": false, 00:12:13.657 "write_zeroes": true, 00:12:13.657 "zcopy": true, 00:12:13.657 "get_zone_info": false, 00:12:13.657 "zone_management": false, 00:12:13.657 "zone_append": false, 00:12:13.657 "compare": false, 00:12:13.657 "compare_and_write": false, 00:12:13.657 "abort": true, 00:12:13.657 "seek_hole": false, 00:12:13.657 "seek_data": false, 00:12:13.657 "copy": true, 00:12:13.657 "nvme_iov_md": false 00:12:13.657 }, 00:12:13.657 "memory_domains": [ 00:12:13.657 { 00:12:13.657 "dma_device_id": "system", 00:12:13.657 "dma_device_type": 1 00:12:13.657 }, 00:12:13.657 { 00:12:13.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.657 "dma_device_type": 2 00:12:13.657 } 00:12:13.657 ], 00:12:13.657 "driver_specific": {} 
00:12:13.657 } 00:12:13.657 ] 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.657 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.927 13:28:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.927 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.927 "name": "Existed_Raid", 00:12:13.927 "uuid": "52a61ab2-4c02-4130-adb1-3c3da4767243", 00:12:13.927 "strip_size_kb": 0, 00:12:13.927 "state": "configuring", 00:12:13.927 "raid_level": "raid1", 00:12:13.927 "superblock": true, 00:12:13.927 "num_base_bdevs": 4, 00:12:13.927 "num_base_bdevs_discovered": 1, 00:12:13.927 "num_base_bdevs_operational": 4, 00:12:13.927 "base_bdevs_list": [ 00:12:13.927 { 00:12:13.927 "name": "BaseBdev1", 00:12:13.927 "uuid": "75254bc9-a744-4f52-b451-c46428d8d589", 00:12:13.927 "is_configured": true, 00:12:13.927 "data_offset": 2048, 00:12:13.927 "data_size": 63488 00:12:13.927 }, 00:12:13.927 { 00:12:13.927 "name": "BaseBdev2", 00:12:13.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.927 "is_configured": false, 00:12:13.927 "data_offset": 0, 00:12:13.927 "data_size": 0 00:12:13.927 }, 00:12:13.927 { 00:12:13.927 "name": "BaseBdev3", 00:12:13.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.927 "is_configured": false, 00:12:13.928 "data_offset": 0, 00:12:13.928 "data_size": 0 00:12:13.928 }, 00:12:13.928 { 00:12:13.928 "name": "BaseBdev4", 00:12:13.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.928 "is_configured": false, 00:12:13.928 "data_offset": 0, 00:12:13.928 "data_size": 0 00:12:13.928 } 00:12:13.928 ] 00:12:13.928 }' 00:12:13.928 13:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.928 13:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.192 [2024-11-18 13:28:44.086173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.192 [2024-11-18 13:28:44.086230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.192 [2024-11-18 13:28:44.098228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.192 [2024-11-18 13:28:44.100587] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.192 [2024-11-18 13:28:44.100679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.192 [2024-11-18 13:28:44.100695] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.192 [2024-11-18 13:28:44.100707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.192 [2024-11-18 13:28:44.100713] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:14.192 [2024-11-18 13:28:44.100722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:14.192 13:28:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.192 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.192 "name": 
"Existed_Raid", 00:12:14.192 "uuid": "4da72ecf-c73c-4c5f-b056-58fc78fac720", 00:12:14.192 "strip_size_kb": 0, 00:12:14.192 "state": "configuring", 00:12:14.192 "raid_level": "raid1", 00:12:14.192 "superblock": true, 00:12:14.192 "num_base_bdevs": 4, 00:12:14.192 "num_base_bdevs_discovered": 1, 00:12:14.192 "num_base_bdevs_operational": 4, 00:12:14.192 "base_bdevs_list": [ 00:12:14.192 { 00:12:14.192 "name": "BaseBdev1", 00:12:14.192 "uuid": "75254bc9-a744-4f52-b451-c46428d8d589", 00:12:14.192 "is_configured": true, 00:12:14.192 "data_offset": 2048, 00:12:14.192 "data_size": 63488 00:12:14.192 }, 00:12:14.192 { 00:12:14.192 "name": "BaseBdev2", 00:12:14.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.192 "is_configured": false, 00:12:14.192 "data_offset": 0, 00:12:14.192 "data_size": 0 00:12:14.192 }, 00:12:14.192 { 00:12:14.192 "name": "BaseBdev3", 00:12:14.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.192 "is_configured": false, 00:12:14.192 "data_offset": 0, 00:12:14.192 "data_size": 0 00:12:14.192 }, 00:12:14.192 { 00:12:14.192 "name": "BaseBdev4", 00:12:14.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.192 "is_configured": false, 00:12:14.192 "data_offset": 0, 00:12:14.192 "data_size": 0 00:12:14.192 } 00:12:14.192 ] 00:12:14.192 }' 00:12:14.193 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.193 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.761 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.762 [2024-11-18 13:28:44.645658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.762 
BaseBdev2 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.762 [ 00:12:14.762 { 00:12:14.762 "name": "BaseBdev2", 00:12:14.762 "aliases": [ 00:12:14.762 "10d16458-2513-4af7-9cc6-f8a51225d420" 00:12:14.762 ], 00:12:14.762 "product_name": "Malloc disk", 00:12:14.762 "block_size": 512, 00:12:14.762 "num_blocks": 65536, 00:12:14.762 "uuid": "10d16458-2513-4af7-9cc6-f8a51225d420", 00:12:14.762 "assigned_rate_limits": { 
00:12:14.762 "rw_ios_per_sec": 0, 00:12:14.762 "rw_mbytes_per_sec": 0, 00:12:14.762 "r_mbytes_per_sec": 0, 00:12:14.762 "w_mbytes_per_sec": 0 00:12:14.762 }, 00:12:14.762 "claimed": true, 00:12:14.762 "claim_type": "exclusive_write", 00:12:14.762 "zoned": false, 00:12:14.762 "supported_io_types": { 00:12:14.762 "read": true, 00:12:14.762 "write": true, 00:12:14.762 "unmap": true, 00:12:14.762 "flush": true, 00:12:14.762 "reset": true, 00:12:14.762 "nvme_admin": false, 00:12:14.762 "nvme_io": false, 00:12:14.762 "nvme_io_md": false, 00:12:14.762 "write_zeroes": true, 00:12:14.762 "zcopy": true, 00:12:14.762 "get_zone_info": false, 00:12:14.762 "zone_management": false, 00:12:14.762 "zone_append": false, 00:12:14.762 "compare": false, 00:12:14.762 "compare_and_write": false, 00:12:14.762 "abort": true, 00:12:14.762 "seek_hole": false, 00:12:14.762 "seek_data": false, 00:12:14.762 "copy": true, 00:12:14.762 "nvme_iov_md": false 00:12:14.762 }, 00:12:14.762 "memory_domains": [ 00:12:14.762 { 00:12:14.762 "dma_device_id": "system", 00:12:14.762 "dma_device_type": 1 00:12:14.762 }, 00:12:14.762 { 00:12:14.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.762 "dma_device_type": 2 00:12:14.762 } 00:12:14.762 ], 00:12:14.762 "driver_specific": {} 00:12:14.762 } 00:12:14.762 ] 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.762 "name": "Existed_Raid", 00:12:14.762 "uuid": "4da72ecf-c73c-4c5f-b056-58fc78fac720", 00:12:14.762 "strip_size_kb": 0, 00:12:14.762 "state": "configuring", 00:12:14.762 "raid_level": "raid1", 00:12:14.762 "superblock": true, 00:12:14.762 "num_base_bdevs": 4, 00:12:14.762 "num_base_bdevs_discovered": 2, 00:12:14.762 "num_base_bdevs_operational": 4, 00:12:14.762 
"base_bdevs_list": [ 00:12:14.762 { 00:12:14.762 "name": "BaseBdev1", 00:12:14.762 "uuid": "75254bc9-a744-4f52-b451-c46428d8d589", 00:12:14.762 "is_configured": true, 00:12:14.762 "data_offset": 2048, 00:12:14.762 "data_size": 63488 00:12:14.762 }, 00:12:14.762 { 00:12:14.762 "name": "BaseBdev2", 00:12:14.762 "uuid": "10d16458-2513-4af7-9cc6-f8a51225d420", 00:12:14.762 "is_configured": true, 00:12:14.762 "data_offset": 2048, 00:12:14.762 "data_size": 63488 00:12:14.762 }, 00:12:14.762 { 00:12:14.762 "name": "BaseBdev3", 00:12:14.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.762 "is_configured": false, 00:12:14.762 "data_offset": 0, 00:12:14.762 "data_size": 0 00:12:14.762 }, 00:12:14.762 { 00:12:14.762 "name": "BaseBdev4", 00:12:14.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.762 "is_configured": false, 00:12:14.762 "data_offset": 0, 00:12:14.762 "data_size": 0 00:12:14.762 } 00:12:14.762 ] 00:12:14.762 }' 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.762 13:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.331 [2024-11-18 13:28:45.222046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.331 BaseBdev3 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.331 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.331 [ 00:12:15.331 { 00:12:15.331 "name": "BaseBdev3", 00:12:15.331 "aliases": [ 00:12:15.331 "b8eaea4b-b9ff-4c84-823d-267056dc96c3" 00:12:15.331 ], 00:12:15.331 "product_name": "Malloc disk", 00:12:15.331 "block_size": 512, 00:12:15.331 "num_blocks": 65536, 00:12:15.331 "uuid": "b8eaea4b-b9ff-4c84-823d-267056dc96c3", 00:12:15.331 "assigned_rate_limits": { 00:12:15.331 "rw_ios_per_sec": 0, 00:12:15.331 "rw_mbytes_per_sec": 0, 00:12:15.331 "r_mbytes_per_sec": 0, 00:12:15.331 "w_mbytes_per_sec": 0 00:12:15.331 }, 00:12:15.331 "claimed": true, 00:12:15.331 "claim_type": "exclusive_write", 00:12:15.331 "zoned": false, 00:12:15.331 "supported_io_types": { 00:12:15.331 "read": true, 00:12:15.331 
"write": true, 00:12:15.331 "unmap": true, 00:12:15.331 "flush": true, 00:12:15.331 "reset": true, 00:12:15.331 "nvme_admin": false, 00:12:15.331 "nvme_io": false, 00:12:15.331 "nvme_io_md": false, 00:12:15.331 "write_zeroes": true, 00:12:15.331 "zcopy": true, 00:12:15.331 "get_zone_info": false, 00:12:15.331 "zone_management": false, 00:12:15.331 "zone_append": false, 00:12:15.331 "compare": false, 00:12:15.331 "compare_and_write": false, 00:12:15.331 "abort": true, 00:12:15.331 "seek_hole": false, 00:12:15.331 "seek_data": false, 00:12:15.331 "copy": true, 00:12:15.331 "nvme_iov_md": false 00:12:15.331 }, 00:12:15.331 "memory_domains": [ 00:12:15.331 { 00:12:15.331 "dma_device_id": "system", 00:12:15.331 "dma_device_type": 1 00:12:15.331 }, 00:12:15.331 { 00:12:15.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.331 "dma_device_type": 2 00:12:15.331 } 00:12:15.332 ], 00:12:15.332 "driver_specific": {} 00:12:15.332 } 00:12:15.332 ] 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.332 "name": "Existed_Raid", 00:12:15.332 "uuid": "4da72ecf-c73c-4c5f-b056-58fc78fac720", 00:12:15.332 "strip_size_kb": 0, 00:12:15.332 "state": "configuring", 00:12:15.332 "raid_level": "raid1", 00:12:15.332 "superblock": true, 00:12:15.332 "num_base_bdevs": 4, 00:12:15.332 "num_base_bdevs_discovered": 3, 00:12:15.332 "num_base_bdevs_operational": 4, 00:12:15.332 "base_bdevs_list": [ 00:12:15.332 { 00:12:15.332 "name": "BaseBdev1", 00:12:15.332 "uuid": "75254bc9-a744-4f52-b451-c46428d8d589", 00:12:15.332 "is_configured": true, 00:12:15.332 "data_offset": 2048, 00:12:15.332 "data_size": 63488 00:12:15.332 }, 00:12:15.332 { 00:12:15.332 "name": "BaseBdev2", 00:12:15.332 "uuid": 
"10d16458-2513-4af7-9cc6-f8a51225d420", 00:12:15.332 "is_configured": true, 00:12:15.332 "data_offset": 2048, 00:12:15.332 "data_size": 63488 00:12:15.332 }, 00:12:15.332 { 00:12:15.332 "name": "BaseBdev3", 00:12:15.332 "uuid": "b8eaea4b-b9ff-4c84-823d-267056dc96c3", 00:12:15.332 "is_configured": true, 00:12:15.332 "data_offset": 2048, 00:12:15.332 "data_size": 63488 00:12:15.332 }, 00:12:15.332 { 00:12:15.332 "name": "BaseBdev4", 00:12:15.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.332 "is_configured": false, 00:12:15.332 "data_offset": 0, 00:12:15.332 "data_size": 0 00:12:15.332 } 00:12:15.332 ] 00:12:15.332 }' 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.332 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.900 [2024-11-18 13:28:45.721860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.900 [2024-11-18 13:28:45.722211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:15.900 [2024-11-18 13:28:45.722227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:15.900 [2024-11-18 13:28:45.722534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:15.900 BaseBdev4 00:12:15.900 [2024-11-18 13:28:45.722732] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:15.900 [2024-11-18 13:28:45.722753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:15.900 [2024-11-18 13:28:45.722918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.900 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 [ 00:12:15.901 { 00:12:15.901 "name": "BaseBdev4", 00:12:15.901 "aliases": [ 00:12:15.901 "73d16f3e-44e5-4fdb-8481-ab136d0eac8e" 00:12:15.901 ], 00:12:15.901 "product_name": "Malloc disk", 00:12:15.901 "block_size": 512, 00:12:15.901 
"num_blocks": 65536, 00:12:15.901 "uuid": "73d16f3e-44e5-4fdb-8481-ab136d0eac8e", 00:12:15.901 "assigned_rate_limits": { 00:12:15.901 "rw_ios_per_sec": 0, 00:12:15.901 "rw_mbytes_per_sec": 0, 00:12:15.901 "r_mbytes_per_sec": 0, 00:12:15.901 "w_mbytes_per_sec": 0 00:12:15.901 }, 00:12:15.901 "claimed": true, 00:12:15.901 "claim_type": "exclusive_write", 00:12:15.901 "zoned": false, 00:12:15.901 "supported_io_types": { 00:12:15.901 "read": true, 00:12:15.901 "write": true, 00:12:15.901 "unmap": true, 00:12:15.901 "flush": true, 00:12:15.901 "reset": true, 00:12:15.901 "nvme_admin": false, 00:12:15.901 "nvme_io": false, 00:12:15.901 "nvme_io_md": false, 00:12:15.901 "write_zeroes": true, 00:12:15.901 "zcopy": true, 00:12:15.901 "get_zone_info": false, 00:12:15.901 "zone_management": false, 00:12:15.901 "zone_append": false, 00:12:15.901 "compare": false, 00:12:15.901 "compare_and_write": false, 00:12:15.901 "abort": true, 00:12:15.901 "seek_hole": false, 00:12:15.901 "seek_data": false, 00:12:15.901 "copy": true, 00:12:15.901 "nvme_iov_md": false 00:12:15.901 }, 00:12:15.901 "memory_domains": [ 00:12:15.901 { 00:12:15.901 "dma_device_id": "system", 00:12:15.901 "dma_device_type": 1 00:12:15.901 }, 00:12:15.901 { 00:12:15.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.901 "dma_device_type": 2 00:12:15.901 } 00:12:15.901 ], 00:12:15.901 "driver_specific": {} 00:12:15.901 } 00:12:15.901 ] 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.901 "name": "Existed_Raid", 00:12:15.901 "uuid": "4da72ecf-c73c-4c5f-b056-58fc78fac720", 00:12:15.901 "strip_size_kb": 0, 00:12:15.901 "state": "online", 00:12:15.901 "raid_level": "raid1", 00:12:15.901 "superblock": true, 00:12:15.901 "num_base_bdevs": 4, 
00:12:15.901 "num_base_bdevs_discovered": 4, 00:12:15.901 "num_base_bdevs_operational": 4, 00:12:15.901 "base_bdevs_list": [ 00:12:15.901 { 00:12:15.901 "name": "BaseBdev1", 00:12:15.901 "uuid": "75254bc9-a744-4f52-b451-c46428d8d589", 00:12:15.901 "is_configured": true, 00:12:15.901 "data_offset": 2048, 00:12:15.901 "data_size": 63488 00:12:15.901 }, 00:12:15.901 { 00:12:15.901 "name": "BaseBdev2", 00:12:15.901 "uuid": "10d16458-2513-4af7-9cc6-f8a51225d420", 00:12:15.901 "is_configured": true, 00:12:15.901 "data_offset": 2048, 00:12:15.901 "data_size": 63488 00:12:15.901 }, 00:12:15.901 { 00:12:15.901 "name": "BaseBdev3", 00:12:15.901 "uuid": "b8eaea4b-b9ff-4c84-823d-267056dc96c3", 00:12:15.901 "is_configured": true, 00:12:15.901 "data_offset": 2048, 00:12:15.901 "data_size": 63488 00:12:15.901 }, 00:12:15.901 { 00:12:15.901 "name": "BaseBdev4", 00:12:15.901 "uuid": "73d16f3e-44e5-4fdb-8481-ab136d0eac8e", 00:12:15.901 "is_configured": true, 00:12:15.901 "data_offset": 2048, 00:12:15.901 "data_size": 63488 00:12:15.901 } 00:12:15.901 ] 00:12:15.901 }' 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.901 13:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.470 
13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.470 [2024-11-18 13:28:46.257342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.470 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.470 "name": "Existed_Raid", 00:12:16.470 "aliases": [ 00:12:16.470 "4da72ecf-c73c-4c5f-b056-58fc78fac720" 00:12:16.470 ], 00:12:16.470 "product_name": "Raid Volume", 00:12:16.470 "block_size": 512, 00:12:16.470 "num_blocks": 63488, 00:12:16.470 "uuid": "4da72ecf-c73c-4c5f-b056-58fc78fac720", 00:12:16.470 "assigned_rate_limits": { 00:12:16.470 "rw_ios_per_sec": 0, 00:12:16.470 "rw_mbytes_per_sec": 0, 00:12:16.470 "r_mbytes_per_sec": 0, 00:12:16.470 "w_mbytes_per_sec": 0 00:12:16.470 }, 00:12:16.470 "claimed": false, 00:12:16.470 "zoned": false, 00:12:16.470 "supported_io_types": { 00:12:16.470 "read": true, 00:12:16.470 "write": true, 00:12:16.470 "unmap": false, 00:12:16.470 "flush": false, 00:12:16.470 "reset": true, 00:12:16.470 "nvme_admin": false, 00:12:16.470 "nvme_io": false, 00:12:16.470 "nvme_io_md": false, 00:12:16.470 "write_zeroes": true, 00:12:16.470 "zcopy": false, 00:12:16.470 "get_zone_info": false, 00:12:16.470 "zone_management": false, 00:12:16.470 "zone_append": false, 00:12:16.470 "compare": false, 00:12:16.470 "compare_and_write": false, 00:12:16.470 "abort": false, 00:12:16.470 "seek_hole": false, 00:12:16.470 "seek_data": false, 00:12:16.470 "copy": false, 00:12:16.470 
"nvme_iov_md": false 00:12:16.470 }, 00:12:16.470 "memory_domains": [ 00:12:16.470 { 00:12:16.470 "dma_device_id": "system", 00:12:16.470 "dma_device_type": 1 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.470 "dma_device_type": 2 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "dma_device_id": "system", 00:12:16.470 "dma_device_type": 1 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.470 "dma_device_type": 2 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "dma_device_id": "system", 00:12:16.470 "dma_device_type": 1 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.470 "dma_device_type": 2 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "dma_device_id": "system", 00:12:16.470 "dma_device_type": 1 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.470 "dma_device_type": 2 00:12:16.470 } 00:12:16.470 ], 00:12:16.470 "driver_specific": { 00:12:16.470 "raid": { 00:12:16.470 "uuid": "4da72ecf-c73c-4c5f-b056-58fc78fac720", 00:12:16.470 "strip_size_kb": 0, 00:12:16.470 "state": "online", 00:12:16.470 "raid_level": "raid1", 00:12:16.470 "superblock": true, 00:12:16.470 "num_base_bdevs": 4, 00:12:16.470 "num_base_bdevs_discovered": 4, 00:12:16.470 "num_base_bdevs_operational": 4, 00:12:16.470 "base_bdevs_list": [ 00:12:16.470 { 00:12:16.470 "name": "BaseBdev1", 00:12:16.470 "uuid": "75254bc9-a744-4f52-b451-c46428d8d589", 00:12:16.470 "is_configured": true, 00:12:16.470 "data_offset": 2048, 00:12:16.470 "data_size": 63488 00:12:16.470 }, 00:12:16.470 { 00:12:16.470 "name": "BaseBdev2", 00:12:16.470 "uuid": "10d16458-2513-4af7-9cc6-f8a51225d420", 00:12:16.470 "is_configured": true, 00:12:16.470 "data_offset": 2048, 00:12:16.470 "data_size": 63488 00:12:16.471 }, 00:12:16.471 { 00:12:16.471 "name": "BaseBdev3", 00:12:16.471 "uuid": "b8eaea4b-b9ff-4c84-823d-267056dc96c3", 00:12:16.471 "is_configured": true, 
00:12:16.471 "data_offset": 2048, 00:12:16.471 "data_size": 63488 00:12:16.471 }, 00:12:16.471 { 00:12:16.471 "name": "BaseBdev4", 00:12:16.471 "uuid": "73d16f3e-44e5-4fdb-8481-ab136d0eac8e", 00:12:16.471 "is_configured": true, 00:12:16.471 "data_offset": 2048, 00:12:16.471 "data_size": 63488 00:12:16.471 } 00:12:16.471 ] 00:12:16.471 } 00:12:16.471 } 00:12:16.471 }' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:16.471 BaseBdev2 00:12:16.471 BaseBdev3 00:12:16.471 BaseBdev4' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.471 13:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.471 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.731 [2024-11-18 13:28:46.604432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:16.731 13:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.731 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.732 "name": "Existed_Raid", 00:12:16.732 "uuid": "4da72ecf-c73c-4c5f-b056-58fc78fac720", 00:12:16.732 "strip_size_kb": 0, 00:12:16.732 
"state": "online", 00:12:16.732 "raid_level": "raid1", 00:12:16.732 "superblock": true, 00:12:16.732 "num_base_bdevs": 4, 00:12:16.732 "num_base_bdevs_discovered": 3, 00:12:16.732 "num_base_bdevs_operational": 3, 00:12:16.732 "base_bdevs_list": [ 00:12:16.732 { 00:12:16.732 "name": null, 00:12:16.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.732 "is_configured": false, 00:12:16.732 "data_offset": 0, 00:12:16.732 "data_size": 63488 00:12:16.732 }, 00:12:16.732 { 00:12:16.732 "name": "BaseBdev2", 00:12:16.732 "uuid": "10d16458-2513-4af7-9cc6-f8a51225d420", 00:12:16.732 "is_configured": true, 00:12:16.732 "data_offset": 2048, 00:12:16.732 "data_size": 63488 00:12:16.732 }, 00:12:16.732 { 00:12:16.732 "name": "BaseBdev3", 00:12:16.732 "uuid": "b8eaea4b-b9ff-4c84-823d-267056dc96c3", 00:12:16.732 "is_configured": true, 00:12:16.732 "data_offset": 2048, 00:12:16.732 "data_size": 63488 00:12:16.732 }, 00:12:16.732 { 00:12:16.732 "name": "BaseBdev4", 00:12:16.732 "uuid": "73d16f3e-44e5-4fdb-8481-ab136d0eac8e", 00:12:16.732 "is_configured": true, 00:12:16.732 "data_offset": 2048, 00:12:16.732 "data_size": 63488 00:12:16.732 } 00:12:16.732 ] 00:12:16.732 }' 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.732 13:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.300 13:28:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 [2024-11-18 13:28:47.199483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.300 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 [2024-11-18 13:28:47.363720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.559 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 [2024-11-18 13:28:47.529336] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:17.559 [2024-11-18 13:28:47.529466] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.819 [2024-11-18 13:28:47.635102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.819 [2024-11-18 13:28:47.635195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.819 [2024-11-18 13:28:47.635210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.819 BaseBdev2 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.819 13:28:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:17.819 [ 00:12:17.819 { 00:12:17.819 "name": "BaseBdev2", 00:12:17.819 "aliases": [ 00:12:17.819 "74b79eb6-d2df-480f-9c3f-878617283d2a" 00:12:17.819 ], 00:12:17.819 "product_name": "Malloc disk", 00:12:17.819 "block_size": 512, 00:12:17.819 "num_blocks": 65536, 00:12:17.819 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:17.819 "assigned_rate_limits": { 00:12:17.819 "rw_ios_per_sec": 0, 00:12:17.819 "rw_mbytes_per_sec": 0, 00:12:17.819 "r_mbytes_per_sec": 0, 00:12:17.819 "w_mbytes_per_sec": 0 00:12:17.819 }, 00:12:17.819 "claimed": false, 00:12:17.820 "zoned": false, 00:12:17.820 "supported_io_types": { 00:12:17.820 "read": true, 00:12:17.820 "write": true, 00:12:17.820 "unmap": true, 00:12:17.820 "flush": true, 00:12:17.820 "reset": true, 00:12:17.820 "nvme_admin": false, 00:12:17.820 "nvme_io": false, 00:12:17.820 "nvme_io_md": false, 00:12:17.820 "write_zeroes": true, 00:12:17.820 "zcopy": true, 00:12:17.820 "get_zone_info": false, 00:12:17.820 "zone_management": false, 00:12:17.820 "zone_append": false, 00:12:17.820 "compare": false, 00:12:17.820 "compare_and_write": false, 00:12:17.820 "abort": true, 00:12:17.820 "seek_hole": false, 00:12:17.820 "seek_data": false, 00:12:17.820 "copy": true, 00:12:17.820 "nvme_iov_md": false 00:12:17.820 }, 00:12:17.820 "memory_domains": [ 00:12:17.820 { 00:12:17.820 "dma_device_id": "system", 00:12:17.820 "dma_device_type": 1 00:12:17.820 }, 00:12:17.820 { 00:12:17.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.820 "dma_device_type": 2 00:12:17.820 } 00:12:17.820 ], 00:12:17.820 "driver_specific": {} 00:12:17.820 } 00:12:17.820 ] 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.820 13:28:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.820 BaseBdev3 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.820 13:28:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.820 [ 00:12:17.820 { 00:12:17.820 "name": "BaseBdev3", 00:12:17.820 "aliases": [ 00:12:17.820 "ce018cd8-5fba-415c-a9f9-65dddde63385" 00:12:17.820 ], 00:12:17.820 "product_name": "Malloc disk", 00:12:17.820 "block_size": 512, 00:12:17.820 "num_blocks": 65536, 00:12:17.820 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:17.820 "assigned_rate_limits": { 00:12:17.820 "rw_ios_per_sec": 0, 00:12:17.820 "rw_mbytes_per_sec": 0, 00:12:17.820 "r_mbytes_per_sec": 0, 00:12:17.820 "w_mbytes_per_sec": 0 00:12:17.820 }, 00:12:17.820 "claimed": false, 00:12:17.820 "zoned": false, 00:12:17.820 "supported_io_types": { 00:12:17.820 "read": true, 00:12:17.820 "write": true, 00:12:17.820 "unmap": true, 00:12:17.820 "flush": true, 00:12:17.820 "reset": true, 00:12:17.820 "nvme_admin": false, 00:12:17.820 "nvme_io": false, 00:12:17.820 "nvme_io_md": false, 00:12:17.820 "write_zeroes": true, 00:12:17.820 "zcopy": true, 00:12:17.820 "get_zone_info": false, 00:12:17.820 "zone_management": false, 00:12:17.820 "zone_append": false, 00:12:17.820 "compare": false, 00:12:17.820 "compare_and_write": false, 00:12:17.820 "abort": true, 00:12:17.820 "seek_hole": false, 00:12:17.820 "seek_data": false, 00:12:17.820 "copy": true, 00:12:17.820 "nvme_iov_md": false 00:12:17.820 }, 00:12:17.820 "memory_domains": [ 00:12:17.820 { 00:12:17.820 "dma_device_id": "system", 00:12:17.820 "dma_device_type": 1 00:12:17.820 }, 00:12:17.820 { 00:12:17.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.820 "dma_device_type": 2 00:12:17.820 } 00:12:17.820 ], 00:12:17.820 "driver_specific": {} 00:12:17.820 } 00:12:17.820 ] 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.820 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.080 BaseBdev4 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.080 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.080 [ 00:12:18.080 { 00:12:18.080 "name": "BaseBdev4", 00:12:18.080 "aliases": [ 00:12:18.080 "be13b1db-e58e-4be0-85f8-a33b31a0d64f" 00:12:18.080 ], 00:12:18.080 "product_name": "Malloc disk", 00:12:18.080 "block_size": 512, 00:12:18.080 "num_blocks": 65536, 00:12:18.080 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:18.080 "assigned_rate_limits": { 00:12:18.080 "rw_ios_per_sec": 0, 00:12:18.080 "rw_mbytes_per_sec": 0, 00:12:18.080 "r_mbytes_per_sec": 0, 00:12:18.080 "w_mbytes_per_sec": 0 00:12:18.080 }, 00:12:18.080 "claimed": false, 00:12:18.080 "zoned": false, 00:12:18.080 "supported_io_types": { 00:12:18.080 "read": true, 00:12:18.080 "write": true, 00:12:18.080 "unmap": true, 00:12:18.080 "flush": true, 00:12:18.080 "reset": true, 00:12:18.080 "nvme_admin": false, 00:12:18.080 "nvme_io": false, 00:12:18.080 "nvme_io_md": false, 00:12:18.080 "write_zeroes": true, 00:12:18.080 "zcopy": true, 00:12:18.081 "get_zone_info": false, 00:12:18.081 "zone_management": false, 00:12:18.081 "zone_append": false, 00:12:18.081 "compare": false, 00:12:18.081 "compare_and_write": false, 00:12:18.081 "abort": true, 00:12:18.081 "seek_hole": false, 00:12:18.081 "seek_data": false, 00:12:18.081 "copy": true, 00:12:18.081 "nvme_iov_md": false 00:12:18.081 }, 00:12:18.081 "memory_domains": [ 00:12:18.081 { 00:12:18.081 "dma_device_id": "system", 00:12:18.081 "dma_device_type": 1 00:12:18.081 }, 00:12:18.081 { 00:12:18.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.081 "dma_device_type": 2 00:12:18.081 } 00:12:18.081 ], 00:12:18.081 "driver_specific": {} 00:12:18.081 } 00:12:18.081 ] 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.081 [2024-11-18 13:28:47.956599] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.081 [2024-11-18 13:28:47.956652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.081 [2024-11-18 13:28:47.956674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.081 [2024-11-18 13:28:47.958785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.081 [2024-11-18 13:28:47.958838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.081 13:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.081 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.081 "name": "Existed_Raid", 00:12:18.081 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:18.081 "strip_size_kb": 0, 00:12:18.081 "state": "configuring", 00:12:18.081 "raid_level": "raid1", 00:12:18.081 "superblock": true, 00:12:18.081 "num_base_bdevs": 4, 00:12:18.081 "num_base_bdevs_discovered": 3, 00:12:18.081 "num_base_bdevs_operational": 4, 00:12:18.081 "base_bdevs_list": [ 00:12:18.081 { 00:12:18.081 "name": "BaseBdev1", 00:12:18.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.081 "is_configured": false, 00:12:18.081 "data_offset": 0, 00:12:18.081 "data_size": 0 00:12:18.081 }, 00:12:18.081 { 00:12:18.081 "name": "BaseBdev2", 00:12:18.081 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 
00:12:18.081 "is_configured": true, 00:12:18.081 "data_offset": 2048, 00:12:18.081 "data_size": 63488 00:12:18.081 }, 00:12:18.081 { 00:12:18.081 "name": "BaseBdev3", 00:12:18.081 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:18.081 "is_configured": true, 00:12:18.081 "data_offset": 2048, 00:12:18.081 "data_size": 63488 00:12:18.081 }, 00:12:18.081 { 00:12:18.081 "name": "BaseBdev4", 00:12:18.081 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:18.081 "is_configured": true, 00:12:18.081 "data_offset": 2048, 00:12:18.081 "data_size": 63488 00:12:18.081 } 00:12:18.081 ] 00:12:18.081 }' 00:12:18.081 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.081 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.649 [2024-11-18 13:28:48.427822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.649 "name": "Existed_Raid", 00:12:18.649 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:18.649 "strip_size_kb": 0, 00:12:18.649 "state": "configuring", 00:12:18.649 "raid_level": "raid1", 00:12:18.649 "superblock": true, 00:12:18.649 "num_base_bdevs": 4, 00:12:18.649 "num_base_bdevs_discovered": 2, 00:12:18.649 "num_base_bdevs_operational": 4, 00:12:18.649 "base_bdevs_list": [ 00:12:18.649 { 00:12:18.649 "name": "BaseBdev1", 00:12:18.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.649 "is_configured": false, 00:12:18.649 "data_offset": 0, 00:12:18.649 "data_size": 0 00:12:18.649 }, 00:12:18.649 { 00:12:18.649 "name": null, 00:12:18.649 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:18.649 
"is_configured": false, 00:12:18.649 "data_offset": 0, 00:12:18.649 "data_size": 63488 00:12:18.649 }, 00:12:18.649 { 00:12:18.649 "name": "BaseBdev3", 00:12:18.649 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:18.649 "is_configured": true, 00:12:18.649 "data_offset": 2048, 00:12:18.649 "data_size": 63488 00:12:18.649 }, 00:12:18.649 { 00:12:18.649 "name": "BaseBdev4", 00:12:18.649 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:18.649 "is_configured": true, 00:12:18.649 "data_offset": 2048, 00:12:18.649 "data_size": 63488 00:12:18.649 } 00:12:18.649 ] 00:12:18.649 }' 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.649 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.908 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.908 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:18.908 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.908 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.909 [2024-11-18 13:28:48.936680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.909 BaseBdev1 
00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.909 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.173 [ 00:12:19.173 { 00:12:19.173 "name": "BaseBdev1", 00:12:19.173 "aliases": [ 00:12:19.173 "d6adbcee-313f-4675-9d00-4055197efb90" 00:12:19.173 ], 00:12:19.173 "product_name": "Malloc disk", 00:12:19.173 "block_size": 512, 00:12:19.173 "num_blocks": 65536, 00:12:19.173 "uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:19.173 "assigned_rate_limits": { 00:12:19.173 
"rw_ios_per_sec": 0, 00:12:19.173 "rw_mbytes_per_sec": 0, 00:12:19.173 "r_mbytes_per_sec": 0, 00:12:19.173 "w_mbytes_per_sec": 0 00:12:19.173 }, 00:12:19.173 "claimed": true, 00:12:19.173 "claim_type": "exclusive_write", 00:12:19.173 "zoned": false, 00:12:19.173 "supported_io_types": { 00:12:19.173 "read": true, 00:12:19.173 "write": true, 00:12:19.173 "unmap": true, 00:12:19.173 "flush": true, 00:12:19.173 "reset": true, 00:12:19.173 "nvme_admin": false, 00:12:19.173 "nvme_io": false, 00:12:19.173 "nvme_io_md": false, 00:12:19.173 "write_zeroes": true, 00:12:19.173 "zcopy": true, 00:12:19.173 "get_zone_info": false, 00:12:19.173 "zone_management": false, 00:12:19.173 "zone_append": false, 00:12:19.173 "compare": false, 00:12:19.173 "compare_and_write": false, 00:12:19.173 "abort": true, 00:12:19.173 "seek_hole": false, 00:12:19.173 "seek_data": false, 00:12:19.173 "copy": true, 00:12:19.173 "nvme_iov_md": false 00:12:19.173 }, 00:12:19.173 "memory_domains": [ 00:12:19.173 { 00:12:19.173 "dma_device_id": "system", 00:12:19.173 "dma_device_type": 1 00:12:19.173 }, 00:12:19.173 { 00:12:19.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.173 "dma_device_type": 2 00:12:19.173 } 00:12:19.173 ], 00:12:19.173 "driver_specific": {} 00:12:19.173 } 00:12:19.173 ] 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.173 13:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.173 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.173 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.173 "name": "Existed_Raid", 00:12:19.173 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:19.173 "strip_size_kb": 0, 00:12:19.173 "state": "configuring", 00:12:19.173 "raid_level": "raid1", 00:12:19.173 "superblock": true, 00:12:19.173 "num_base_bdevs": 4, 00:12:19.173 "num_base_bdevs_discovered": 3, 00:12:19.173 "num_base_bdevs_operational": 4, 00:12:19.173 "base_bdevs_list": [ 00:12:19.173 { 00:12:19.173 "name": "BaseBdev1", 00:12:19.173 "uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:19.173 "is_configured": true, 00:12:19.173 "data_offset": 2048, 00:12:19.173 "data_size": 63488 
00:12:19.173 }, 00:12:19.173 { 00:12:19.173 "name": null, 00:12:19.173 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:19.173 "is_configured": false, 00:12:19.173 "data_offset": 0, 00:12:19.173 "data_size": 63488 00:12:19.173 }, 00:12:19.173 { 00:12:19.173 "name": "BaseBdev3", 00:12:19.173 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:19.173 "is_configured": true, 00:12:19.173 "data_offset": 2048, 00:12:19.173 "data_size": 63488 00:12:19.173 }, 00:12:19.173 { 00:12:19.173 "name": "BaseBdev4", 00:12:19.173 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:19.173 "is_configured": true, 00:12:19.173 "data_offset": 2048, 00:12:19.173 "data_size": 63488 00:12:19.173 } 00:12:19.173 ] 00:12:19.173 }' 00:12:19.173 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.173 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.436 
[2024-11-18 13:28:49.403953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.436 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.436 13:28:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.436 "name": "Existed_Raid", 00:12:19.436 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:19.436 "strip_size_kb": 0, 00:12:19.436 "state": "configuring", 00:12:19.436 "raid_level": "raid1", 00:12:19.436 "superblock": true, 00:12:19.436 "num_base_bdevs": 4, 00:12:19.436 "num_base_bdevs_discovered": 2, 00:12:19.436 "num_base_bdevs_operational": 4, 00:12:19.436 "base_bdevs_list": [ 00:12:19.436 { 00:12:19.436 "name": "BaseBdev1", 00:12:19.436 "uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:19.436 "is_configured": true, 00:12:19.436 "data_offset": 2048, 00:12:19.436 "data_size": 63488 00:12:19.436 }, 00:12:19.436 { 00:12:19.436 "name": null, 00:12:19.436 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:19.436 "is_configured": false, 00:12:19.436 "data_offset": 0, 00:12:19.436 "data_size": 63488 00:12:19.436 }, 00:12:19.436 { 00:12:19.436 "name": null, 00:12:19.436 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:19.436 "is_configured": false, 00:12:19.436 "data_offset": 0, 00:12:19.437 "data_size": 63488 00:12:19.437 }, 00:12:19.437 { 00:12:19.437 "name": "BaseBdev4", 00:12:19.437 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:19.437 "is_configured": true, 00:12:19.437 "data_offset": 2048, 00:12:19.437 "data_size": 63488 00:12:19.437 } 00:12:19.437 ] 00:12:19.437 }' 00:12:19.437 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.437 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.004 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.005 
13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.005 [2024-11-18 13:28:49.855227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.005 "name": "Existed_Raid", 00:12:20.005 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:20.005 "strip_size_kb": 0, 00:12:20.005 "state": "configuring", 00:12:20.005 "raid_level": "raid1", 00:12:20.005 "superblock": true, 00:12:20.005 "num_base_bdevs": 4, 00:12:20.005 "num_base_bdevs_discovered": 3, 00:12:20.005 "num_base_bdevs_operational": 4, 00:12:20.005 "base_bdevs_list": [ 00:12:20.005 { 00:12:20.005 "name": "BaseBdev1", 00:12:20.005 "uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:20.005 "is_configured": true, 00:12:20.005 "data_offset": 2048, 00:12:20.005 "data_size": 63488 00:12:20.005 }, 00:12:20.005 { 00:12:20.005 "name": null, 00:12:20.005 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:20.005 "is_configured": false, 00:12:20.005 "data_offset": 0, 00:12:20.005 "data_size": 63488 00:12:20.005 }, 00:12:20.005 { 00:12:20.005 "name": "BaseBdev3", 00:12:20.005 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:20.005 "is_configured": true, 00:12:20.005 "data_offset": 2048, 00:12:20.005 "data_size": 63488 00:12:20.005 }, 00:12:20.005 { 00:12:20.005 "name": "BaseBdev4", 00:12:20.005 "uuid": 
"be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:20.005 "is_configured": true, 00:12:20.005 "data_offset": 2048, 00:12:20.005 "data_size": 63488 00:12:20.005 } 00:12:20.005 ] 00:12:20.005 }' 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.005 13:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.573 [2024-11-18 13:28:50.370386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.573 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.573 "name": "Existed_Raid", 00:12:20.573 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:20.573 "strip_size_kb": 0, 00:12:20.573 "state": "configuring", 00:12:20.573 "raid_level": "raid1", 00:12:20.573 "superblock": true, 00:12:20.573 "num_base_bdevs": 4, 00:12:20.573 "num_base_bdevs_discovered": 2, 00:12:20.573 "num_base_bdevs_operational": 4, 00:12:20.573 "base_bdevs_list": [ 00:12:20.573 { 00:12:20.573 "name": null, 00:12:20.573 
"uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:20.573 "is_configured": false, 00:12:20.573 "data_offset": 0, 00:12:20.573 "data_size": 63488 00:12:20.573 }, 00:12:20.573 { 00:12:20.573 "name": null, 00:12:20.573 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:20.573 "is_configured": false, 00:12:20.573 "data_offset": 0, 00:12:20.574 "data_size": 63488 00:12:20.574 }, 00:12:20.574 { 00:12:20.574 "name": "BaseBdev3", 00:12:20.574 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:20.574 "is_configured": true, 00:12:20.574 "data_offset": 2048, 00:12:20.574 "data_size": 63488 00:12:20.574 }, 00:12:20.574 { 00:12:20.574 "name": "BaseBdev4", 00:12:20.574 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:20.574 "is_configured": true, 00:12:20.574 "data_offset": 2048, 00:12:20.574 "data_size": 63488 00:12:20.574 } 00:12:20.574 ] 00:12:20.574 }' 00:12:20.574 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.574 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.144 [2024-11-18 13:28:50.949733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.144 13:28:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.144 13:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.144 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.144 "name": "Existed_Raid", 00:12:21.144 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:21.144 "strip_size_kb": 0, 00:12:21.144 "state": "configuring", 00:12:21.144 "raid_level": "raid1", 00:12:21.144 "superblock": true, 00:12:21.144 "num_base_bdevs": 4, 00:12:21.144 "num_base_bdevs_discovered": 3, 00:12:21.144 "num_base_bdevs_operational": 4, 00:12:21.144 "base_bdevs_list": [ 00:12:21.144 { 00:12:21.144 "name": null, 00:12:21.144 "uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:21.144 "is_configured": false, 00:12:21.144 "data_offset": 0, 00:12:21.144 "data_size": 63488 00:12:21.144 }, 00:12:21.144 { 00:12:21.144 "name": "BaseBdev2", 00:12:21.144 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:21.144 "is_configured": true, 00:12:21.144 "data_offset": 2048, 00:12:21.144 "data_size": 63488 00:12:21.144 }, 00:12:21.144 { 00:12:21.144 "name": "BaseBdev3", 00:12:21.144 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:21.144 "is_configured": true, 00:12:21.144 "data_offset": 2048, 00:12:21.144 "data_size": 63488 00:12:21.144 }, 00:12:21.144 { 00:12:21.144 "name": "BaseBdev4", 00:12:21.144 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:21.144 "is_configured": true, 00:12:21.144 "data_offset": 2048, 00:12:21.144 "data_size": 63488 00:12:21.144 } 00:12:21.144 ] 00:12:21.144 }' 00:12:21.144 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.144 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.403 13:28:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:21.403 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d6adbcee-313f-4675-9d00-4055197efb90 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.662 [2024-11-18 13:28:51.523735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:21.662 [2024-11-18 13:28:51.524000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.662 [2024-11-18 13:28:51.524016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.662 [2024-11-18 13:28:51.524351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:21.662 NewBaseBdev 00:12:21.662 [2024-11-18 13:28:51.524538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.662 [2024-11-18 13:28:51.524555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:21.662 [2024-11-18 13:28:51.524709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.662 13:28:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.662 [ 00:12:21.662 { 00:12:21.662 "name": "NewBaseBdev", 00:12:21.662 "aliases": [ 00:12:21.662 "d6adbcee-313f-4675-9d00-4055197efb90" 00:12:21.662 ], 00:12:21.662 "product_name": "Malloc disk", 00:12:21.662 "block_size": 512, 00:12:21.662 "num_blocks": 65536, 00:12:21.662 "uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:21.662 "assigned_rate_limits": { 00:12:21.662 "rw_ios_per_sec": 0, 00:12:21.662 "rw_mbytes_per_sec": 0, 00:12:21.662 "r_mbytes_per_sec": 0, 00:12:21.662 "w_mbytes_per_sec": 0 00:12:21.662 }, 00:12:21.662 "claimed": true, 00:12:21.662 "claim_type": "exclusive_write", 00:12:21.662 "zoned": false, 00:12:21.662 "supported_io_types": { 00:12:21.662 "read": true, 00:12:21.662 "write": true, 00:12:21.662 "unmap": true, 00:12:21.662 "flush": true, 00:12:21.662 "reset": true, 00:12:21.662 "nvme_admin": false, 00:12:21.662 "nvme_io": false, 00:12:21.662 "nvme_io_md": false, 00:12:21.662 "write_zeroes": true, 00:12:21.662 "zcopy": true, 00:12:21.662 "get_zone_info": false, 00:12:21.662 "zone_management": false, 00:12:21.662 "zone_append": false, 00:12:21.662 "compare": false, 00:12:21.662 "compare_and_write": false, 00:12:21.662 "abort": true, 00:12:21.662 "seek_hole": false, 00:12:21.662 "seek_data": false, 00:12:21.662 "copy": true, 00:12:21.662 "nvme_iov_md": false 00:12:21.662 }, 00:12:21.662 "memory_domains": [ 00:12:21.662 { 00:12:21.662 "dma_device_id": "system", 00:12:21.662 "dma_device_type": 1 00:12:21.662 }, 00:12:21.662 { 00:12:21.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.662 "dma_device_type": 2 00:12:21.662 } 00:12:21.662 ], 00:12:21.662 "driver_specific": {} 00:12:21.662 } 00:12:21.662 ] 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:21.662 13:28:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.662 "name": "Existed_Raid", 00:12:21.662 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:21.662 "strip_size_kb": 0, 00:12:21.662 
"state": "online", 00:12:21.662 "raid_level": "raid1", 00:12:21.662 "superblock": true, 00:12:21.662 "num_base_bdevs": 4, 00:12:21.662 "num_base_bdevs_discovered": 4, 00:12:21.662 "num_base_bdevs_operational": 4, 00:12:21.662 "base_bdevs_list": [ 00:12:21.662 { 00:12:21.662 "name": "NewBaseBdev", 00:12:21.662 "uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:21.662 "is_configured": true, 00:12:21.662 "data_offset": 2048, 00:12:21.662 "data_size": 63488 00:12:21.662 }, 00:12:21.662 { 00:12:21.662 "name": "BaseBdev2", 00:12:21.662 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:21.662 "is_configured": true, 00:12:21.662 "data_offset": 2048, 00:12:21.662 "data_size": 63488 00:12:21.662 }, 00:12:21.662 { 00:12:21.662 "name": "BaseBdev3", 00:12:21.662 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:21.662 "is_configured": true, 00:12:21.662 "data_offset": 2048, 00:12:21.662 "data_size": 63488 00:12:21.662 }, 00:12:21.662 { 00:12:21.662 "name": "BaseBdev4", 00:12:21.662 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:21.662 "is_configured": true, 00:12:21.662 "data_offset": 2048, 00:12:21.662 "data_size": 63488 00:12:21.662 } 00:12:21.662 ] 00:12:21.662 }' 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.662 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.232 
13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.232 13:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.232 [2024-11-18 13:28:51.991395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.232 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.232 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.232 "name": "Existed_Raid", 00:12:22.232 "aliases": [ 00:12:22.232 "13a69a6c-645c-4089-8ce1-da3fb2dc849b" 00:12:22.232 ], 00:12:22.232 "product_name": "Raid Volume", 00:12:22.232 "block_size": 512, 00:12:22.232 "num_blocks": 63488, 00:12:22.232 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:22.232 "assigned_rate_limits": { 00:12:22.232 "rw_ios_per_sec": 0, 00:12:22.232 "rw_mbytes_per_sec": 0, 00:12:22.233 "r_mbytes_per_sec": 0, 00:12:22.233 "w_mbytes_per_sec": 0 00:12:22.233 }, 00:12:22.233 "claimed": false, 00:12:22.233 "zoned": false, 00:12:22.233 "supported_io_types": { 00:12:22.233 "read": true, 00:12:22.233 "write": true, 00:12:22.233 "unmap": false, 00:12:22.233 "flush": false, 00:12:22.233 "reset": true, 00:12:22.233 "nvme_admin": false, 00:12:22.233 "nvme_io": false, 00:12:22.233 "nvme_io_md": false, 00:12:22.233 "write_zeroes": true, 00:12:22.233 "zcopy": false, 00:12:22.233 "get_zone_info": false, 00:12:22.233 "zone_management": false, 00:12:22.233 "zone_append": false, 00:12:22.233 "compare": false, 00:12:22.233 "compare_and_write": false, 00:12:22.233 
"abort": false, 00:12:22.233 "seek_hole": false, 00:12:22.233 "seek_data": false, 00:12:22.233 "copy": false, 00:12:22.233 "nvme_iov_md": false 00:12:22.233 }, 00:12:22.233 "memory_domains": [ 00:12:22.233 { 00:12:22.233 "dma_device_id": "system", 00:12:22.233 "dma_device_type": 1 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.233 "dma_device_type": 2 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "dma_device_id": "system", 00:12:22.233 "dma_device_type": 1 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.233 "dma_device_type": 2 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "dma_device_id": "system", 00:12:22.233 "dma_device_type": 1 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.233 "dma_device_type": 2 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "dma_device_id": "system", 00:12:22.233 "dma_device_type": 1 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.233 "dma_device_type": 2 00:12:22.233 } 00:12:22.233 ], 00:12:22.233 "driver_specific": { 00:12:22.233 "raid": { 00:12:22.233 "uuid": "13a69a6c-645c-4089-8ce1-da3fb2dc849b", 00:12:22.233 "strip_size_kb": 0, 00:12:22.233 "state": "online", 00:12:22.233 "raid_level": "raid1", 00:12:22.233 "superblock": true, 00:12:22.233 "num_base_bdevs": 4, 00:12:22.233 "num_base_bdevs_discovered": 4, 00:12:22.233 "num_base_bdevs_operational": 4, 00:12:22.233 "base_bdevs_list": [ 00:12:22.233 { 00:12:22.233 "name": "NewBaseBdev", 00:12:22.233 "uuid": "d6adbcee-313f-4675-9d00-4055197efb90", 00:12:22.233 "is_configured": true, 00:12:22.233 "data_offset": 2048, 00:12:22.233 "data_size": 63488 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "name": "BaseBdev2", 00:12:22.233 "uuid": "74b79eb6-d2df-480f-9c3f-878617283d2a", 00:12:22.233 "is_configured": true, 00:12:22.233 "data_offset": 2048, 00:12:22.233 "data_size": 63488 00:12:22.233 }, 00:12:22.233 { 
00:12:22.233 "name": "BaseBdev3", 00:12:22.233 "uuid": "ce018cd8-5fba-415c-a9f9-65dddde63385", 00:12:22.233 "is_configured": true, 00:12:22.233 "data_offset": 2048, 00:12:22.233 "data_size": 63488 00:12:22.233 }, 00:12:22.233 { 00:12:22.233 "name": "BaseBdev4", 00:12:22.233 "uuid": "be13b1db-e58e-4be0-85f8-a33b31a0d64f", 00:12:22.233 "is_configured": true, 00:12:22.233 "data_offset": 2048, 00:12:22.233 "data_size": 63488 00:12:22.233 } 00:12:22.233 ] 00:12:22.233 } 00:12:22.233 } 00:12:22.233 }' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:22.233 BaseBdev2 00:12:22.233 BaseBdev3 00:12:22.233 BaseBdev4' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.233 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.493 [2024-11-18 13:28:52.322539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.493 [2024-11-18 13:28:52.322579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.493 [2024-11-18 13:28:52.322716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.493 [2024-11-18 13:28:52.323081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.493 [2024-11-18 13:28:52.323104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73855 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73855 ']' 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73855 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73855 00:12:22.493 killing process with pid 73855 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73855' 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73855 00:12:22.493 [2024-11-18 13:28:52.371957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.493 13:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73855 00:12:22.753 [2024-11-18 13:28:52.799546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.134 13:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:24.134 00:12:24.134 real 0m11.869s 00:12:24.134 user 0m18.530s 00:12:24.134 sys 0m2.286s 00:12:24.134 13:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:24.134 13:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.134 ************************************ 00:12:24.134 END TEST raid_state_function_test_sb 00:12:24.134 ************************************ 00:12:24.134 13:28:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:24.134 13:28:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:24.134 13:28:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.134 13:28:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.134 ************************************ 00:12:24.134 START TEST raid_superblock_test 00:12:24.134 ************************************ 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:24.134 13:28:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74531 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74531 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74531 ']' 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.134 13:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.394 [2024-11-18 13:28:54.193869] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:24.394 [2024-11-18 13:28:54.194036] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74531 ] 00:12:24.394 [2024-11-18 13:28:54.374039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.654 [2024-11-18 13:28:54.515143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.914 [2024-11-18 13:28:54.753472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.914 [2024-11-18 13:28:54.753545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:25.174 
13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 malloc1 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 [2024-11-18 13:28:55.096036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:25.174 [2024-11-18 13:28:55.096124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.174 [2024-11-18 13:28:55.096160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:25.174 [2024-11-18 13:28:55.096171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.174 [2024-11-18 13:28:55.098565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.174 [2024-11-18 13:28:55.098611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:25.174 pt1 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 malloc2 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 [2024-11-18 13:28:55.156859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:25.174 [2024-11-18 13:28:55.156917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.174 [2024-11-18 13:28:55.156939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:25.174 [2024-11-18 13:28:55.156948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.174 [2024-11-18 13:28:55.159328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.174 [2024-11-18 13:28:55.159363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:25.174 
pt2 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 malloc3 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 [2024-11-18 13:28:55.225269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:25.174 [2024-11-18 13:28:55.225326] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.174 [2024-11-18 13:28:55.225349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:25.174 [2024-11-18 13:28:55.225358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.434 [2024-11-18 13:28:55.227797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.434 [2024-11-18 13:28:55.227832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:25.434 pt3 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.434 malloc4 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.434 [2024-11-18 13:28:55.287224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:25.434 [2024-11-18 13:28:55.287279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.434 [2024-11-18 13:28:55.287300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:25.434 [2024-11-18 13:28:55.287309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.434 [2024-11-18 13:28:55.289680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.434 [2024-11-18 13:28:55.289713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:25.434 pt4 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.434 [2024-11-18 13:28:55.299238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:25.434 [2024-11-18 13:28:55.301273] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.434 [2024-11-18 13:28:55.301335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:25.434 [2024-11-18 13:28:55.301375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:25.434 [2024-11-18 13:28:55.301568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:25.434 [2024-11-18 13:28:55.301591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.434 [2024-11-18 13:28:55.301861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:25.434 [2024-11-18 13:28:55.302046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:25.434 [2024-11-18 13:28:55.302069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:25.434 [2024-11-18 13:28:55.302223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.434 
13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.434 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.435 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.435 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.435 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.435 "name": "raid_bdev1", 00:12:25.435 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747", 00:12:25.435 "strip_size_kb": 0, 00:12:25.435 "state": "online", 00:12:25.435 "raid_level": "raid1", 00:12:25.435 "superblock": true, 00:12:25.435 "num_base_bdevs": 4, 00:12:25.435 "num_base_bdevs_discovered": 4, 00:12:25.435 "num_base_bdevs_operational": 4, 00:12:25.435 "base_bdevs_list": [ 00:12:25.435 { 00:12:25.435 "name": "pt1", 00:12:25.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.435 "is_configured": true, 00:12:25.435 "data_offset": 2048, 00:12:25.435 "data_size": 63488 00:12:25.435 }, 00:12:25.435 { 00:12:25.435 "name": "pt2", 00:12:25.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.435 "is_configured": true, 00:12:25.435 "data_offset": 2048, 00:12:25.435 "data_size": 63488 00:12:25.435 }, 00:12:25.435 { 00:12:25.435 "name": "pt3", 00:12:25.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.435 "is_configured": true, 00:12:25.435 "data_offset": 2048, 00:12:25.435 "data_size": 63488 
00:12:25.435 }, 00:12:25.435 { 00:12:25.435 "name": "pt4", 00:12:25.435 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.435 "is_configured": true, 00:12:25.435 "data_offset": 2048, 00:12:25.435 "data_size": 63488 00:12:25.435 } 00:12:25.435 ] 00:12:25.435 }' 00:12:25.435 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.435 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.695 [2024-11-18 13:28:55.718894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.695 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.954 "name": "raid_bdev1", 00:12:25.954 "aliases": [ 00:12:25.954 "188076c4-d7b3-4346-884b-5fa3469a5747" 00:12:25.954 ], 
00:12:25.954 "product_name": "Raid Volume", 00:12:25.954 "block_size": 512, 00:12:25.954 "num_blocks": 63488, 00:12:25.954 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747", 00:12:25.954 "assigned_rate_limits": { 00:12:25.954 "rw_ios_per_sec": 0, 00:12:25.954 "rw_mbytes_per_sec": 0, 00:12:25.954 "r_mbytes_per_sec": 0, 00:12:25.954 "w_mbytes_per_sec": 0 00:12:25.954 }, 00:12:25.954 "claimed": false, 00:12:25.954 "zoned": false, 00:12:25.954 "supported_io_types": { 00:12:25.954 "read": true, 00:12:25.954 "write": true, 00:12:25.954 "unmap": false, 00:12:25.954 "flush": false, 00:12:25.954 "reset": true, 00:12:25.954 "nvme_admin": false, 00:12:25.954 "nvme_io": false, 00:12:25.954 "nvme_io_md": false, 00:12:25.954 "write_zeroes": true, 00:12:25.954 "zcopy": false, 00:12:25.954 "get_zone_info": false, 00:12:25.954 "zone_management": false, 00:12:25.954 "zone_append": false, 00:12:25.954 "compare": false, 00:12:25.954 "compare_and_write": false, 00:12:25.954 "abort": false, 00:12:25.954 "seek_hole": false, 00:12:25.954 "seek_data": false, 00:12:25.954 "copy": false, 00:12:25.954 "nvme_iov_md": false 00:12:25.954 }, 00:12:25.954 "memory_domains": [ 00:12:25.954 { 00:12:25.954 "dma_device_id": "system", 00:12:25.954 "dma_device_type": 1 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.954 "dma_device_type": 2 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "dma_device_id": "system", 00:12:25.954 "dma_device_type": 1 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.954 "dma_device_type": 2 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "dma_device_id": "system", 00:12:25.954 "dma_device_type": 1 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.954 "dma_device_type": 2 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "dma_device_id": "system", 00:12:25.954 "dma_device_type": 1 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:25.954 "dma_device_type": 2 00:12:25.954 } 00:12:25.954 ], 00:12:25.954 "driver_specific": { 00:12:25.954 "raid": { 00:12:25.954 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747", 00:12:25.954 "strip_size_kb": 0, 00:12:25.954 "state": "online", 00:12:25.954 "raid_level": "raid1", 00:12:25.954 "superblock": true, 00:12:25.954 "num_base_bdevs": 4, 00:12:25.954 "num_base_bdevs_discovered": 4, 00:12:25.954 "num_base_bdevs_operational": 4, 00:12:25.954 "base_bdevs_list": [ 00:12:25.954 { 00:12:25.954 "name": "pt1", 00:12:25.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.954 "is_configured": true, 00:12:25.954 "data_offset": 2048, 00:12:25.954 "data_size": 63488 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "name": "pt2", 00:12:25.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.954 "is_configured": true, 00:12:25.954 "data_offset": 2048, 00:12:25.954 "data_size": 63488 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "name": "pt3", 00:12:25.954 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.954 "is_configured": true, 00:12:25.954 "data_offset": 2048, 00:12:25.954 "data_size": 63488 00:12:25.954 }, 00:12:25.954 { 00:12:25.954 "name": "pt4", 00:12:25.954 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.954 "is_configured": true, 00:12:25.954 "data_offset": 2048, 00:12:25.954 "data_size": 63488 00:12:25.954 } 00:12:25.954 ] 00:12:25.954 } 00:12:25.954 } 00:12:25.954 }' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:25.954 pt2 00:12:25.954 pt3 00:12:25.954 pt4' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.954 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.954 13:28:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.955 13:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:25.955 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:25.955 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.214 [2024-11-18 13:28:56.006359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=188076c4-d7b3-4346-884b-5fa3469a5747 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 188076c4-d7b3-4346-884b-5fa3469a5747 ']' 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.214 [2024-11-18 13:28:56.053936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.214 [2024-11-18 13:28:56.053970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.214 [2024-11-18 13:28:56.054088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.214 [2024-11-18 13:28:56.054197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.214 [2024-11-18 13:28:56.054216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.214 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.215 13:28:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 [2024-11-18 13:28:56.221629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:26.215 [2024-11-18 13:28:56.223958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:26.215 [2024-11-18 13:28:56.224016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:26.215 [2024-11-18 13:28:56.224050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:26.215 [2024-11-18 13:28:56.224103] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:26.215 [2024-11-18 13:28:56.224173] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:26.215 [2024-11-18 13:28:56.224193] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:26.215 [2024-11-18 13:28:56.224213] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:26.215 [2024-11-18 13:28:56.224227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.215 [2024-11-18 13:28:56.224238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:26.215 request: 00:12:26.215 { 00:12:26.215 "name": "raid_bdev1", 00:12:26.215 "raid_level": "raid1", 00:12:26.215 "base_bdevs": [ 00:12:26.215 "malloc1", 00:12:26.215 "malloc2", 00:12:26.215 "malloc3", 00:12:26.215 "malloc4" 00:12:26.215 ], 00:12:26.215 "superblock": false, 00:12:26.215 "method": "bdev_raid_create", 00:12:26.215 "req_id": 1 00:12:26.215 } 00:12:26.215 Got JSON-RPC error response 00:12:26.215 response: 00:12:26.215 { 00:12:26.215 "code": -17, 00:12:26.215 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:26.215 } 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.215 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.475 
13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.475 [2024-11-18 13:28:56.277493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.475 [2024-11-18 13:28:56.277548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.475 [2024-11-18 13:28:56.277565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:26.475 [2024-11-18 13:28:56.277576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.475 [2024-11-18 13:28:56.280163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.475 [2024-11-18 13:28:56.280201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.475 [2024-11-18 13:28:56.280280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:26.475 [2024-11-18 13:28:56.280338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.475 pt1 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.475 13:28:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.475 "name": "raid_bdev1", 00:12:26.475 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747", 00:12:26.475 "strip_size_kb": 0, 00:12:26.475 "state": "configuring", 00:12:26.475 "raid_level": "raid1", 00:12:26.475 "superblock": true, 00:12:26.475 "num_base_bdevs": 4, 00:12:26.475 "num_base_bdevs_discovered": 1, 00:12:26.475 "num_base_bdevs_operational": 4, 00:12:26.475 "base_bdevs_list": [ 00:12:26.475 { 00:12:26.475 "name": "pt1", 00:12:26.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.475 "is_configured": true, 00:12:26.475 "data_offset": 2048, 00:12:26.475 "data_size": 63488 00:12:26.475 }, 00:12:26.475 { 00:12:26.475 "name": null, 00:12:26.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.475 "is_configured": false, 00:12:26.475 "data_offset": 2048, 00:12:26.475 "data_size": 63488 00:12:26.475 }, 00:12:26.475 { 00:12:26.475 "name": null, 00:12:26.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.475 
"is_configured": false, 00:12:26.475 "data_offset": 2048, 00:12:26.475 "data_size": 63488 00:12:26.475 }, 00:12:26.475 { 00:12:26.475 "name": null, 00:12:26.475 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.475 "is_configured": false, 00:12:26.475 "data_offset": 2048, 00:12:26.475 "data_size": 63488 00:12:26.475 } 00:12:26.475 ] 00:12:26.475 }' 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.475 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.734 [2024-11-18 13:28:56.760739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.734 [2024-11-18 13:28:56.760825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.734 [2024-11-18 13:28:56.760860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:26.734 [2024-11-18 13:28:56.760874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.734 [2024-11-18 13:28:56.761453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.734 [2024-11-18 13:28:56.761482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.734 [2024-11-18 13:28:56.761583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:26.734 [2024-11-18 13:28:56.761625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:26.734 pt2 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.734 [2024-11-18 13:28:56.772695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.734 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.735 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.735 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.735 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.735 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:26.735 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.735 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.994 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.994 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.994 "name": "raid_bdev1", 00:12:26.994 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747", 00:12:26.994 "strip_size_kb": 0, 00:12:26.994 "state": "configuring", 00:12:26.994 "raid_level": "raid1", 00:12:26.994 "superblock": true, 00:12:26.994 "num_base_bdevs": 4, 00:12:26.994 "num_base_bdevs_discovered": 1, 00:12:26.994 "num_base_bdevs_operational": 4, 00:12:26.994 "base_bdevs_list": [ 00:12:26.994 { 00:12:26.994 "name": "pt1", 00:12:26.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.994 "is_configured": true, 00:12:26.994 "data_offset": 2048, 00:12:26.994 "data_size": 63488 00:12:26.994 }, 00:12:26.994 { 00:12:26.994 "name": null, 00:12:26.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.994 "is_configured": false, 00:12:26.994 "data_offset": 0, 00:12:26.994 "data_size": 63488 00:12:26.994 }, 00:12:26.994 { 00:12:26.994 "name": null, 00:12:26.994 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.994 "is_configured": false, 00:12:26.994 "data_offset": 2048, 00:12:26.994 "data_size": 63488 00:12:26.995 }, 00:12:26.995 { 00:12:26.995 "name": null, 00:12:26.995 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.995 "is_configured": false, 00:12:26.995 "data_offset": 2048, 00:12:26.995 "data_size": 63488 00:12:26.995 } 00:12:26.995 ] 00:12:26.995 }' 00:12:26.995 13:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.995 13:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.255 [2024-11-18 13:28:57.195978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.255 [2024-11-18 13:28:57.196059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.255 [2024-11-18 13:28:57.196091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:27.255 [2024-11-18 13:28:57.196104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.255 [2024-11-18 13:28:57.196690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.255 [2024-11-18 13:28:57.196715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.255 [2024-11-18 13:28:57.196815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.255 [2024-11-18 13:28:57.196847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.255 pt2 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:27.255 13:28:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.255 [2024-11-18 13:28:57.207912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:27.255 [2024-11-18 13:28:57.207966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.255 [2024-11-18 13:28:57.207984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:27.255 [2024-11-18 13:28:57.207992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.255 [2024-11-18 13:28:57.208405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.255 [2024-11-18 13:28:57.208426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:27.255 [2024-11-18 13:28:57.208494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:27.255 [2024-11-18 13:28:57.208513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:27.255 pt3 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.255 [2024-11-18 13:28:57.219858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:27.255 [2024-11-18 
13:28:57.219903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.255 [2024-11-18 13:28:57.219920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:27.255 [2024-11-18 13:28:57.219929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.255 [2024-11-18 13:28:57.220310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.255 [2024-11-18 13:28:57.220333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:27.255 [2024-11-18 13:28:57.220393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:27.255 [2024-11-18 13:28:57.220410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:27.255 [2024-11-18 13:28:57.220553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.255 [2024-11-18 13:28:57.220569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.255 [2024-11-18 13:28:57.220819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:27.255 [2024-11-18 13:28:57.220969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:27.255 [2024-11-18 13:28:57.220992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:27.255 [2024-11-18 13:28:57.221151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.255 pt4 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.255 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.255 "name": "raid_bdev1", 00:12:27.255 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747", 00:12:27.255 "strip_size_kb": 0, 00:12:27.255 "state": "online", 00:12:27.255 "raid_level": "raid1", 00:12:27.255 "superblock": true, 00:12:27.255 "num_base_bdevs": 4, 00:12:27.255 
"num_base_bdevs_discovered": 4, 00:12:27.255 "num_base_bdevs_operational": 4, 00:12:27.255 "base_bdevs_list": [ 00:12:27.255 { 00:12:27.255 "name": "pt1", 00:12:27.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.255 "is_configured": true, 00:12:27.255 "data_offset": 2048, 00:12:27.255 "data_size": 63488 00:12:27.255 }, 00:12:27.255 { 00:12:27.255 "name": "pt2", 00:12:27.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.256 "is_configured": true, 00:12:27.256 "data_offset": 2048, 00:12:27.256 "data_size": 63488 00:12:27.256 }, 00:12:27.256 { 00:12:27.256 "name": "pt3", 00:12:27.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.256 "is_configured": true, 00:12:27.256 "data_offset": 2048, 00:12:27.256 "data_size": 63488 00:12:27.256 }, 00:12:27.256 { 00:12:27.256 "name": "pt4", 00:12:27.256 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.256 "is_configured": true, 00:12:27.256 "data_offset": 2048, 00:12:27.256 "data_size": 63488 00:12:27.256 } 00:12:27.256 ] 00:12:27.256 }' 00:12:27.256 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.256 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1
00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:27.824 [2024-11-18 13:28:57.715507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:27.824 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:27.824 "name": "raid_bdev1",
00:12:27.824 "aliases": [
00:12:27.824 "188076c4-d7b3-4346-884b-5fa3469a5747"
00:12:27.824 ],
00:12:27.824 "product_name": "Raid Volume",
00:12:27.824 "block_size": 512,
00:12:27.824 "num_blocks": 63488,
00:12:27.824 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747",
00:12:27.824 "assigned_rate_limits": {
00:12:27.824 "rw_ios_per_sec": 0,
00:12:27.824 "rw_mbytes_per_sec": 0,
00:12:27.824 "r_mbytes_per_sec": 0,
00:12:27.824 "w_mbytes_per_sec": 0
00:12:27.824 },
00:12:27.824 "claimed": false,
00:12:27.824 "zoned": false,
00:12:27.824 "supported_io_types": {
00:12:27.824 "read": true,
00:12:27.824 "write": true,
00:12:27.824 "unmap": false,
00:12:27.824 "flush": false,
00:12:27.824 "reset": true,
00:12:27.824 "nvme_admin": false,
00:12:27.824 "nvme_io": false,
00:12:27.824 "nvme_io_md": false,
00:12:27.824 "write_zeroes": true,
00:12:27.824 "zcopy": false,
00:12:27.824 "get_zone_info": false,
00:12:27.824 "zone_management": false,
00:12:27.824 "zone_append": false,
00:12:27.824 "compare": false,
00:12:27.824 "compare_and_write": false,
00:12:27.824 "abort": false,
00:12:27.824 "seek_hole": false,
00:12:27.824 "seek_data": false,
00:12:27.824 "copy": false,
00:12:27.824 "nvme_iov_md": false
00:12:27.824 },
00:12:27.824 "memory_domains": [
00:12:27.824 {
00:12:27.824 "dma_device_id": "system",
00:12:27.824 "dma_device_type": 1
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:27.824 "dma_device_type": 2
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "dma_device_id": "system",
00:12:27.824 "dma_device_type": 1
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:27.824 "dma_device_type": 2
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "dma_device_id": "system",
00:12:27.824 "dma_device_type": 1
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:27.824 "dma_device_type": 2
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "dma_device_id": "system",
00:12:27.824 "dma_device_type": 1
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:27.824 "dma_device_type": 2
00:12:27.824 }
00:12:27.824 ],
00:12:27.824 "driver_specific": {
00:12:27.824 "raid": {
00:12:27.824 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747",
00:12:27.824 "strip_size_kb": 0,
00:12:27.824 "state": "online",
00:12:27.824 "raid_level": "raid1",
00:12:27.824 "superblock": true,
00:12:27.824 "num_base_bdevs": 4,
00:12:27.824 "num_base_bdevs_discovered": 4,
00:12:27.824 "num_base_bdevs_operational": 4,
00:12:27.824 "base_bdevs_list": [
00:12:27.824 {
00:12:27.824 "name": "pt1",
00:12:27.824 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:27.824 "is_configured": true,
00:12:27.824 "data_offset": 2048,
00:12:27.824 "data_size": 63488
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "name": "pt2",
00:12:27.824 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:27.824 "is_configured": true,
00:12:27.824 "data_offset": 2048,
00:12:27.824 "data_size": 63488
00:12:27.824 },
00:12:27.824 {
00:12:27.824 "name": "pt3",
00:12:27.825 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:27.825 "is_configured": true,
00:12:27.825 "data_offset": 2048,
00:12:27.825 "data_size": 63488
00:12:27.825 },
00:12:27.825 {
00:12:27.825 "name": "pt4",
00:12:27.825 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:27.825 "is_configured": true,
00:12:27.825 "data_offset": 2048,
00:12:27.825 "data_size": 63488
00:12:27.825 }
00:12:27.825 ]
00:12:27.825 }
00:12:27.825 }
00:12:27.825 }'
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:27.825 pt2
00:12:27.825 pt3
00:12:27.825 pt4'
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.825 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:28.084 13:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:28.084 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:28.084 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:28.084 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.084 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.084 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.084 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:28.084 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:28.084 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:12:28.085 [2024-11-18 13:28:58.038831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 188076c4-d7b3-4346-884b-5fa3469a5747 '!=' 188076c4-d7b3-4346-884b-5fa3469a5747 ']'
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.085 [2024-11-18 13:28:58.086478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:28.085 "name": "raid_bdev1",
00:12:28.085 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747",
00:12:28.085 "strip_size_kb": 0,
00:12:28.085 "state": "online",
00:12:28.085 "raid_level": "raid1",
00:12:28.085 "superblock": true,
00:12:28.085 "num_base_bdevs": 4,
00:12:28.085 "num_base_bdevs_discovered": 3,
00:12:28.085 "num_base_bdevs_operational": 3,
00:12:28.085 "base_bdevs_list": [
00:12:28.085 {
00:12:28.085 "name": null,
00:12:28.085 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:28.085 "is_configured": false,
00:12:28.085 "data_offset": 0,
00:12:28.085 "data_size": 63488
00:12:28.085 },
00:12:28.085 {
00:12:28.085 "name": "pt2",
00:12:28.085 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:28.085 "is_configured": true,
00:12:28.085 "data_offset": 2048,
00:12:28.085 "data_size": 63488
00:12:28.085 },
00:12:28.085 {
00:12:28.085 "name": "pt3",
00:12:28.085 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:28.085 "is_configured": true,
00:12:28.085 "data_offset": 2048,
00:12:28.085 "data_size": 63488
00:12:28.085 },
00:12:28.085 {
00:12:28.085 "name": "pt4",
00:12:28.085 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:28.085 "is_configured": true,
00:12:28.085 "data_offset": 2048,
00:12:28.085 "data_size": 63488
00:12:28.085 }
00:12:28.085 ]
00:12:28.085 }'
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:28.085 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.653 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:28.653 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.653 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.653 [2024-11-18 13:28:58.501725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:28.653 [2024-11-18 13:28:58.501758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:28.653 [2024-11-18 13:28:58.501844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:28.653 [2024-11-18 13:28:58.501932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:28.653 [2024-11-18 13:28:58.501943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.654 [2024-11-18 13:28:58.597542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:28.654 [2024-11-18 13:28:58.597596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:28.654 [2024-11-18 13:28:58.597615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:12:28.654 [2024-11-18 13:28:58.597624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:28.654 [2024-11-18 13:28:58.600142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:28.654 [2024-11-18 13:28:58.600177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:28.654 [2024-11-18 13:28:58.600267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:28.654 [2024-11-18 13:28:58.600312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:28.654 pt2
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:28.654 "name": "raid_bdev1",
00:12:28.654 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747",
00:12:28.654 "strip_size_kb": 0,
00:12:28.654 "state": "configuring",
00:12:28.654 "raid_level": "raid1",
00:12:28.654 "superblock": true,
00:12:28.654 "num_base_bdevs": 4,
00:12:28.654 "num_base_bdevs_discovered": 1,
00:12:28.654 "num_base_bdevs_operational": 3,
00:12:28.654 "base_bdevs_list": [
00:12:28.654 {
00:12:28.654 "name": null,
00:12:28.654 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:28.654 "is_configured": false,
00:12:28.654 "data_offset": 2048,
00:12:28.654 "data_size": 63488
00:12:28.654 },
00:12:28.654 {
00:12:28.654 "name": "pt2",
00:12:28.654 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:28.654 "is_configured": true,
00:12:28.654 "data_offset": 2048,
00:12:28.654 "data_size": 63488
00:12:28.654 },
00:12:28.654 {
00:12:28.654 "name": null,
00:12:28.654 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:28.654 "is_configured": false,
00:12:28.654 "data_offset": 2048,
00:12:28.654 "data_size": 63488
00:12:28.654 },
00:12:28.654 {
00:12:28.654 "name": null,
00:12:28.654 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:28.654 "is_configured": false,
00:12:28.654 "data_offset": 2048,
00:12:28.654 "data_size": 63488
00:12:28.654 }
00:12:28.654 ]
00:12:28.654 }'
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:28.654 13:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.221 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:12:29.221 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:29.221 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:29.221 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.221 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.221 [2024-11-18 13:28:59.068805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:29.222 [2024-11-18 13:28:59.068883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:29.222 [2024-11-18 13:28:59.068910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:12:29.222 [2024-11-18 13:28:59.068919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:29.222 [2024-11-18 13:28:59.069498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:29.222 [2024-11-18 13:28:59.069523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:29.222 [2024-11-18 13:28:59.069626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:29.222 [2024-11-18 13:28:59.069652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:29.222 pt3
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:29.222 "name": "raid_bdev1",
00:12:29.222 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747",
00:12:29.222 "strip_size_kb": 0,
00:12:29.222 "state": "configuring",
00:12:29.222 "raid_level": "raid1",
00:12:29.222 "superblock": true,
00:12:29.222 "num_base_bdevs": 4,
00:12:29.222 "num_base_bdevs_discovered": 2,
00:12:29.222 "num_base_bdevs_operational": 3,
00:12:29.222 "base_bdevs_list": [
00:12:29.222 {
00:12:29.222 "name": null,
00:12:29.222 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:29.222 "is_configured": false,
00:12:29.222 "data_offset": 2048,
00:12:29.222 "data_size": 63488
00:12:29.222 },
00:12:29.222 {
00:12:29.222 "name": "pt2",
00:12:29.222 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:29.222 "is_configured": true,
00:12:29.222 "data_offset": 2048,
00:12:29.222 "data_size": 63488
00:12:29.222 },
00:12:29.222 {
00:12:29.222 "name": "pt3",
00:12:29.222 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:29.222 "is_configured": true,
00:12:29.222 "data_offset": 2048,
00:12:29.222 "data_size": 63488
00:12:29.222 },
00:12:29.222 {
00:12:29.222 "name": null,
00:12:29.222 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:29.222 "is_configured": false,
00:12:29.222 "data_offset": 2048,
00:12:29.222 "data_size": 63488
00:12:29.222 }
00:12:29.222 ]
00:12:29.222 }'
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:29.222 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.480 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:12:29.480 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:29.480 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3
00:12:29.480 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:29.480 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.480 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.480 [2024-11-18 13:28:59.488075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:29.480 [2024-11-18 13:28:59.488160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:29.480 [2024-11-18 13:28:59.488187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:12:29.480 [2024-11-18 13:28:59.488198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:29.481 [2024-11-18 13:28:59.488727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:29.481 [2024-11-18 13:28:59.488751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:29.481 [2024-11-18 13:28:59.488850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:29.481 [2024-11-18 13:28:59.488884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:29.481 [2024-11-18 13:28:59.489045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:12:29.481 [2024-11-18 13:28:59.489061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:29.481 [2024-11-18 13:28:59.489346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:12:29.481 [2024-11-18 13:28:59.489504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:12:29.481 [2024-11-18 13:28:59.489523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:12:29.481 [2024-11-18 13:28:59.489673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:29.481 pt4
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.481 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:29.740 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:29.740 "name": "raid_bdev1",
00:12:29.740 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747",
00:12:29.740 "strip_size_kb": 0,
00:12:29.740 "state": "online",
00:12:29.740 "raid_level": "raid1",
00:12:29.740 "superblock": true,
00:12:29.740 "num_base_bdevs": 4,
00:12:29.740 "num_base_bdevs_discovered": 3,
00:12:29.740 "num_base_bdevs_operational": 3,
00:12:29.740 "base_bdevs_list": [
00:12:29.740 {
00:12:29.740 "name": null,
00:12:29.740 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:29.740 "is_configured": false,
00:12:29.740 "data_offset": 2048,
00:12:29.740 "data_size": 63488
00:12:29.740 },
00:12:29.740 {
00:12:29.740 "name": "pt2",
00:12:29.740 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:29.740 "is_configured": true,
00:12:29.740 "data_offset": 2048,
00:12:29.740 "data_size": 63488
00:12:29.740 },
00:12:29.740 {
00:12:29.740 "name": "pt3",
00:12:29.740 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:29.740 "is_configured": true,
00:12:29.740 "data_offset": 2048,
00:12:29.740 "data_size": 63488
00:12:29.740 },
00:12:29.740 {
00:12:29.740 "name": "pt4",
00:12:29.740 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:29.740 "is_configured": true,
00:12:29.740 "data_offset": 2048,
00:12:29.740 "data_size": 63488
00:12:29.740 }
00:12:29.740 ]
00:12:29.740 }'
00:12:29.740 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:29.740 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.000 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:30.000 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.000 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.000 [2024-11-18 13:28:59.907291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:30.000 [2024-11-18 13:28:59.907325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:30.000 [2024-11-18 13:28:59.907415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:30.000 [2024-11-18 13:28:59.907516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:30.000 [2024-11-18 13:28:59.907535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:12:30.000 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.000 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.000 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:12:30.000 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.000 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']'
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.001 [2024-11-18 13:28:59.979203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:30.001 [2024-11-18 13:28:59.979272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:30.001 [2024-11-18 13:28:59.979292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080
00:12:30.001 [2024-11-18 13:28:59.979306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:30.001 [2024-11-18 13:28:59.981884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:30.001 [2024-11-18 13:28:59.981925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:30.001 [2024-11-18 13:28:59.982016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:30.001 [2024-11-18 13:28:59.982069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:30.001 [2024-11-18 13:28:59.982239] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:12:30.001 [2024-11-18 13:28:59.982267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:30.001 [2024-11-18 13:28:59.982284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:12:30.001 [2024-11-18 13:28:59.982365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:30.001 [2024-11-18 13:28:59.982479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:30.001 pt1
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']'
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:30.001 13:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.001 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:30.001 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:30.001 "name": "raid_bdev1",
00:12:30.001 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747",
00:12:30.001 "strip_size_kb": 0,
00:12:30.001 "state": "configuring",
00:12:30.001 "raid_level": "raid1",
00:12:30.001 "superblock": true,
00:12:30.001 "num_base_bdevs": 4,
00:12:30.001 "num_base_bdevs_discovered": 2,
00:12:30.001 "num_base_bdevs_operational": 3,
00:12:30.001 "base_bdevs_list": [
00:12:30.001 {
00:12:30.001 "name": null,
00:12:30.001 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:30.001 "is_configured": false,
00:12:30.001 "data_offset": 2048,
00:12:30.001
"data_size": 63488 00:12:30.001 }, 00:12:30.001 { 00:12:30.001 "name": "pt2", 00:12:30.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.001 "is_configured": true, 00:12:30.001 "data_offset": 2048, 00:12:30.001 "data_size": 63488 00:12:30.001 }, 00:12:30.001 { 00:12:30.001 "name": "pt3", 00:12:30.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.001 "is_configured": true, 00:12:30.001 "data_offset": 2048, 00:12:30.001 "data_size": 63488 00:12:30.001 }, 00:12:30.001 { 00:12:30.001 "name": null, 00:12:30.001 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.001 "is_configured": false, 00:12:30.001 "data_offset": 2048, 00:12:30.001 "data_size": 63488 00:12:30.001 } 00:12:30.001 ] 00:12:30.001 }' 00:12:30.001 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.001 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.628 [2024-11-18 
13:29:00.442440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:30.628 [2024-11-18 13:29:00.442516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.628 [2024-11-18 13:29:00.442541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:30.628 [2024-11-18 13:29:00.442551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.628 [2024-11-18 13:29:00.443074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.628 [2024-11-18 13:29:00.443099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:30.628 [2024-11-18 13:29:00.443234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:30.628 [2024-11-18 13:29:00.443276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:30.628 [2024-11-18 13:29:00.443439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:30.628 [2024-11-18 13:29:00.443453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.628 [2024-11-18 13:29:00.443732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:30.628 [2024-11-18 13:29:00.443901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:30.628 [2024-11-18 13:29:00.443918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:30.628 [2024-11-18 13:29:00.444070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.628 pt4 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.628 13:29:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.628 "name": "raid_bdev1", 00:12:30.628 "uuid": "188076c4-d7b3-4346-884b-5fa3469a5747", 00:12:30.628 "strip_size_kb": 0, 00:12:30.628 "state": "online", 00:12:30.628 "raid_level": "raid1", 00:12:30.628 "superblock": true, 00:12:30.628 "num_base_bdevs": 4, 00:12:30.628 "num_base_bdevs_discovered": 3, 00:12:30.628 "num_base_bdevs_operational": 3, 00:12:30.628 "base_bdevs_list": [ 00:12:30.628 { 
00:12:30.628 "name": null, 00:12:30.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.628 "is_configured": false, 00:12:30.628 "data_offset": 2048, 00:12:30.628 "data_size": 63488 00:12:30.628 }, 00:12:30.628 { 00:12:30.628 "name": "pt2", 00:12:30.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.628 "is_configured": true, 00:12:30.628 "data_offset": 2048, 00:12:30.628 "data_size": 63488 00:12:30.628 }, 00:12:30.628 { 00:12:30.628 "name": "pt3", 00:12:30.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.628 "is_configured": true, 00:12:30.628 "data_offset": 2048, 00:12:30.628 "data_size": 63488 00:12:30.628 }, 00:12:30.628 { 00:12:30.628 "name": "pt4", 00:12:30.628 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:30.628 "is_configured": true, 00:12:30.628 "data_offset": 2048, 00:12:30.628 "data_size": 63488 00:12:30.628 } 00:12:30.628 ] 00:12:30.628 }' 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.628 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.888 
13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:30.888 [2024-11-18 13:29:00.897958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.888 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.147 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 188076c4-d7b3-4346-884b-5fa3469a5747 '!=' 188076c4-d7b3-4346-884b-5fa3469a5747 ']' 00:12:31.147 13:29:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74531 00:12:31.147 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74531 ']' 00:12:31.147 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74531 00:12:31.147 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:31.148 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.148 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74531 00:12:31.148 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.148 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.148 killing process with pid 74531 00:12:31.148 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74531' 00:12:31.148 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74531 00:12:31.148 [2024-11-18 13:29:00.984538] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.148 [2024-11-18 13:29:00.984652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.148 [2024-11-18 13:29:00.984741] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.148 [2024-11-18 13:29:00.984759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:31.148 13:29:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74531 00:12:31.407 [2024-11-18 13:29:01.422005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.787 13:29:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:32.787 00:12:32.787 real 0m8.526s 00:12:32.787 user 0m13.164s 00:12:32.787 sys 0m1.662s 00:12:32.787 13:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.787 13:29:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.787 ************************************ 00:12:32.787 END TEST raid_superblock_test 00:12:32.787 ************************************ 00:12:32.787 13:29:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:32.787 13:29:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:32.787 13:29:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.787 13:29:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.787 ************************************ 00:12:32.787 START TEST raid_read_error_test 00:12:32.787 ************************************ 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:32.787 13:29:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.V5mkxwma3p 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75018 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75018 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75018 ']' 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.787 13:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.787 [2024-11-18 13:29:02.802727] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:32.787 [2024-11-18 13:29:02.802873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75018 ] 00:12:33.046 [2024-11-18 13:29:02.965801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.046 [2024-11-18 13:29:03.097260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.304 [2024-11-18 13:29:03.332492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.304 [2024-11-18 13:29:03.332543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 BaseBdev1_malloc 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 true 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 [2024-11-18 13:29:03.704296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:33.872 [2024-11-18 13:29:03.704355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.872 [2024-11-18 13:29:03.704376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:33.872 [2024-11-18 13:29:03.704388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.872 [2024-11-18 13:29:03.706818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.872 [2024-11-18 13:29:03.706859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.872 BaseBdev1 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 BaseBdev2_malloc 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 true 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 [2024-11-18 13:29:03.772587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:33.872 [2024-11-18 13:29:03.772674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.872 [2024-11-18 13:29:03.772690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:33.872 [2024-11-18 13:29:03.772701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.872 [2024-11-18 13:29:03.775036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.872 [2024-11-18 13:29:03.775079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.872 BaseBdev2 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 BaseBdev3_malloc 00:12:33.872 13:29:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 true 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.872 [2024-11-18 13:29:03.852756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:33.872 [2024-11-18 13:29:03.852815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.872 [2024-11-18 13:29:03.852832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:33.872 [2024-11-18 13:29:03.852843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.872 [2024-11-18 13:29:03.855202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.872 [2024-11-18 13:29:03.855238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:33.872 BaseBdev3 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.872 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.873 BaseBdev4_malloc 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.873 true 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.873 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.131 [2024-11-18 13:29:03.928875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:34.131 [2024-11-18 13:29:03.928933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.131 [2024-11-18 13:29:03.928951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:34.131 [2024-11-18 13:29:03.928961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.131 [2024-11-18 13:29:03.931343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.131 [2024-11-18 13:29:03.931382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:34.131 BaseBdev4 00:12:34.131 13:29:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.131 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:34.131 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.131 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.131 [2024-11-18 13:29:03.940918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.132 [2024-11-18 13:29:03.942974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.132 [2024-11-18 13:29:03.943059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.132 [2024-11-18 13:29:03.943122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.132 [2024-11-18 13:29:03.943359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:34.132 [2024-11-18 13:29:03.943379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.132 [2024-11-18 13:29:03.943620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:34.132 [2024-11-18 13:29:03.943798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:34.132 [2024-11-18 13:29:03.943813] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:34.132 [2024-11-18 13:29:03.943970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:34.132 13:29:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.132 "name": "raid_bdev1", 00:12:34.132 "uuid": "66c32d1e-7734-4bed-a6e1-ec08651d29c1", 00:12:34.132 "strip_size_kb": 0, 00:12:34.132 "state": "online", 00:12:34.132 "raid_level": "raid1", 00:12:34.132 "superblock": true, 00:12:34.132 "num_base_bdevs": 4, 00:12:34.132 "num_base_bdevs_discovered": 4, 00:12:34.132 "num_base_bdevs_operational": 4, 00:12:34.132 "base_bdevs_list": [ 00:12:34.132 { 
00:12:34.132 "name": "BaseBdev1", 00:12:34.132 "uuid": "201ed4d0-92a4-577b-9572-ed41104a5b09", 00:12:34.132 "is_configured": true, 00:12:34.132 "data_offset": 2048, 00:12:34.132 "data_size": 63488 00:12:34.132 }, 00:12:34.132 { 00:12:34.132 "name": "BaseBdev2", 00:12:34.132 "uuid": "6af4cd66-56d8-587b-92b9-b62e6d69331d", 00:12:34.132 "is_configured": true, 00:12:34.132 "data_offset": 2048, 00:12:34.132 "data_size": 63488 00:12:34.132 }, 00:12:34.132 { 00:12:34.132 "name": "BaseBdev3", 00:12:34.132 "uuid": "906068d2-5cdb-546a-9cf2-ece0d6f40e11", 00:12:34.132 "is_configured": true, 00:12:34.132 "data_offset": 2048, 00:12:34.132 "data_size": 63488 00:12:34.132 }, 00:12:34.132 { 00:12:34.132 "name": "BaseBdev4", 00:12:34.132 "uuid": "5e8b0c1d-e1fc-5862-8de4-ebc397ceaac1", 00:12:34.132 "is_configured": true, 00:12:34.132 "data_offset": 2048, 00:12:34.132 "data_size": 63488 00:12:34.132 } 00:12:34.132 ] 00:12:34.132 }' 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.132 13:29:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.391 13:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:34.391 13:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:34.391 [2024-11-18 13:29:04.429350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.329 13:29:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.329 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.329 13:29:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.588 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.588 "name": "raid_bdev1", 00:12:35.588 "uuid": "66c32d1e-7734-4bed-a6e1-ec08651d29c1", 00:12:35.588 "strip_size_kb": 0, 00:12:35.588 "state": "online", 00:12:35.588 "raid_level": "raid1", 00:12:35.588 "superblock": true, 00:12:35.588 "num_base_bdevs": 4, 00:12:35.588 "num_base_bdevs_discovered": 4, 00:12:35.588 "num_base_bdevs_operational": 4, 00:12:35.588 "base_bdevs_list": [ 00:12:35.588 { 00:12:35.588 "name": "BaseBdev1", 00:12:35.588 "uuid": "201ed4d0-92a4-577b-9572-ed41104a5b09", 00:12:35.588 "is_configured": true, 00:12:35.588 "data_offset": 2048, 00:12:35.588 "data_size": 63488 00:12:35.588 }, 00:12:35.588 { 00:12:35.588 "name": "BaseBdev2", 00:12:35.588 "uuid": "6af4cd66-56d8-587b-92b9-b62e6d69331d", 00:12:35.588 "is_configured": true, 00:12:35.588 "data_offset": 2048, 00:12:35.588 "data_size": 63488 00:12:35.588 }, 00:12:35.588 { 00:12:35.588 "name": "BaseBdev3", 00:12:35.588 "uuid": "906068d2-5cdb-546a-9cf2-ece0d6f40e11", 00:12:35.588 "is_configured": true, 00:12:35.588 "data_offset": 2048, 00:12:35.588 "data_size": 63488 00:12:35.588 }, 00:12:35.588 { 00:12:35.588 "name": "BaseBdev4", 00:12:35.588 "uuid": "5e8b0c1d-e1fc-5862-8de4-ebc397ceaac1", 00:12:35.588 "is_configured": true, 00:12:35.588 "data_offset": 2048, 00:12:35.588 "data_size": 63488 00:12:35.588 } 00:12:35.588 ] 00:12:35.588 }' 00:12:35.588 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.588 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.848 [2024-11-18 13:29:05.762500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.848 [2024-11-18 13:29:05.762543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.848 [2024-11-18 13:29:05.765314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.848 [2024-11-18 13:29:05.765389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.848 [2024-11-18 13:29:05.765522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.848 [2024-11-18 13:29:05.765542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:35.848 { 00:12:35.848 "results": [ 00:12:35.848 { 00:12:35.848 "job": "raid_bdev1", 00:12:35.848 "core_mask": "0x1", 00:12:35.848 "workload": "randrw", 00:12:35.848 "percentage": 50, 00:12:35.848 "status": "finished", 00:12:35.848 "queue_depth": 1, 00:12:35.848 "io_size": 131072, 00:12:35.848 "runtime": 1.333654, 00:12:35.848 "iops": 7824.368239438415, 00:12:35.848 "mibps": 978.0460299298019, 00:12:35.848 "io_failed": 0, 00:12:35.848 "io_timeout": 0, 00:12:35.848 "avg_latency_us": 125.25995158215862, 00:12:35.848 "min_latency_us": 23.02882096069869, 00:12:35.848 "max_latency_us": 1430.9170305676855 00:12:35.848 } 00:12:35.848 ], 00:12:35.848 "core_count": 1 00:12:35.848 } 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75018 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75018 ']' 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75018 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75018 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.848 killing process with pid 75018 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75018' 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75018 00:12:35.848 [2024-11-18 13:29:05.811893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.848 13:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75018 00:12:36.442 [2024-11-18 13:29:06.166910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.V5mkxwma3p 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:37.385 00:12:37.385 real 0m4.744s 00:12:37.385 user 0m5.408s 00:12:37.385 sys 0m0.697s 
00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.385 13:29:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.385 ************************************ 00:12:37.385 END TEST raid_read_error_test 00:12:37.385 ************************************ 00:12:37.647 13:29:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:37.647 13:29:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:37.647 13:29:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.647 13:29:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.647 ************************************ 00:12:37.647 START TEST raid_write_error_test 00:12:37.647 ************************************ 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FHQiE5RIw4 00:12:37.647 13:29:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75164 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75164 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75164 ']' 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.647 13:29:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.647 [2024-11-18 13:29:07.621354] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:37.647 [2024-11-18 13:29:07.621490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75164 ] 00:12:37.907 [2024-11-18 13:29:07.803588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.907 [2024-11-18 13:29:07.945099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.165 [2024-11-18 13:29:08.188153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.165 [2024-11-18 13:29:08.188232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.734 BaseBdev1_malloc 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.734 true 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.734 [2024-11-18 13:29:08.547959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.734 [2024-11-18 13:29:08.548025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.734 [2024-11-18 13:29:08.548050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:38.734 [2024-11-18 13:29:08.548063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.734 [2024-11-18 13:29:08.550695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.734 [2024-11-18 13:29:08.550737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.734 BaseBdev1 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.734 BaseBdev2_malloc 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:38.734 13:29:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.734 true 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.734 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.735 [2024-11-18 13:29:08.621540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:38.735 [2024-11-18 13:29:08.621609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.735 [2024-11-18 13:29:08.621627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:38.735 [2024-11-18 13:29:08.621640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.735 [2024-11-18 13:29:08.624117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.735 [2024-11-18 13:29:08.624163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.735 BaseBdev2 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:38.735 BaseBdev3_malloc 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.735 true 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.735 [2024-11-18 13:29:08.705003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:38.735 [2024-11-18 13:29:08.705061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.735 [2024-11-18 13:29:08.705095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:38.735 [2024-11-18 13:29:08.705106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.735 [2024-11-18 13:29:08.707532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.735 [2024-11-18 13:29:08.707572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:38.735 BaseBdev3 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.735 BaseBdev4_malloc 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.735 true 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.735 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.735 [2024-11-18 13:29:08.781638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:38.735 [2024-11-18 13:29:08.781703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.735 [2024-11-18 13:29:08.781752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:38.735 [2024-11-18 13:29:08.781763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.735 [2024-11-18 13:29:08.784355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.735 [2024-11-18 13:29:08.784399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:38.735 BaseBdev4 
00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 [2024-11-18 13:29:08.793667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.995 [2024-11-18 13:29:08.795816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:38.995 [2024-11-18 13:29:08.795912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:38.995 [2024-11-18 13:29:08.795976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:38.995 [2024-11-18 13:29:08.796241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:38.995 [2024-11-18 13:29:08.796262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.995 [2024-11-18 13:29:08.796537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:38.995 [2024-11-18 13:29:08.796726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:38.995 [2024-11-18 13:29:08.796742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:38.995 [2024-11-18 13:29:08.796911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.995 "name": "raid_bdev1", 00:12:38.995 "uuid": "76263d55-5f4d-4eab-9066-d5fb718f5a23", 00:12:38.995 "strip_size_kb": 0, 00:12:38.995 "state": "online", 00:12:38.995 "raid_level": "raid1", 00:12:38.995 "superblock": true, 00:12:38.995 "num_base_bdevs": 4, 00:12:38.995 "num_base_bdevs_discovered": 4, 00:12:38.995 
"num_base_bdevs_operational": 4, 00:12:38.995 "base_bdevs_list": [ 00:12:38.995 { 00:12:38.995 "name": "BaseBdev1", 00:12:38.995 "uuid": "065dee91-a769-54ac-8dc2-7a7f0559335b", 00:12:38.995 "is_configured": true, 00:12:38.995 "data_offset": 2048, 00:12:38.995 "data_size": 63488 00:12:38.995 }, 00:12:38.995 { 00:12:38.995 "name": "BaseBdev2", 00:12:38.995 "uuid": "5226ee63-4202-5c1e-b965-a6ad7e314e2f", 00:12:38.995 "is_configured": true, 00:12:38.995 "data_offset": 2048, 00:12:38.995 "data_size": 63488 00:12:38.995 }, 00:12:38.995 { 00:12:38.995 "name": "BaseBdev3", 00:12:38.995 "uuid": "8fb8b4f6-d6f9-5384-bb6a-9d54f8c5eb72", 00:12:38.995 "is_configured": true, 00:12:38.995 "data_offset": 2048, 00:12:38.995 "data_size": 63488 00:12:38.995 }, 00:12:38.995 { 00:12:38.995 "name": "BaseBdev4", 00:12:38.995 "uuid": "55941609-ec0a-527e-b8e6-b3fb01ac6890", 00:12:38.995 "is_configured": true, 00:12:38.995 "data_offset": 2048, 00:12:38.995 "data_size": 63488 00:12:38.995 } 00:12:38.995 ] 00:12:38.995 }' 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.995 13:29:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.254 13:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:39.254 13:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.512 [2024-11-18 13:29:09.370073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.450 [2024-11-18 13:29:10.283490] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:40.450 [2024-11-18 13:29:10.283558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:40.450 [2024-11-18 13:29:10.283812] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.450 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.451 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.451 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.451 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.451 "name": "raid_bdev1", 00:12:40.451 "uuid": "76263d55-5f4d-4eab-9066-d5fb718f5a23", 00:12:40.451 "strip_size_kb": 0, 00:12:40.451 "state": "online", 00:12:40.451 "raid_level": "raid1", 00:12:40.451 "superblock": true, 00:12:40.451 "num_base_bdevs": 4, 00:12:40.451 "num_base_bdevs_discovered": 3, 00:12:40.451 "num_base_bdevs_operational": 3, 00:12:40.451 "base_bdevs_list": [ 00:12:40.451 { 00:12:40.451 "name": null, 00:12:40.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.451 "is_configured": false, 00:12:40.451 "data_offset": 0, 00:12:40.451 "data_size": 63488 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "name": "BaseBdev2", 00:12:40.451 "uuid": "5226ee63-4202-5c1e-b965-a6ad7e314e2f", 00:12:40.451 "is_configured": true, 00:12:40.451 "data_offset": 2048, 00:12:40.451 "data_size": 63488 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "name": "BaseBdev3", 00:12:40.451 "uuid": "8fb8b4f6-d6f9-5384-bb6a-9d54f8c5eb72", 00:12:40.451 "is_configured": true, 00:12:40.451 "data_offset": 2048, 00:12:40.451 "data_size": 63488 00:12:40.451 }, 00:12:40.451 { 00:12:40.451 "name": "BaseBdev4", 00:12:40.451 "uuid": "55941609-ec0a-527e-b8e6-b3fb01ac6890", 00:12:40.451 "is_configured": true, 00:12:40.451 "data_offset": 2048, 00:12:40.451 "data_size": 63488 00:12:40.451 } 00:12:40.451 ] 
00:12:40.451 }' 00:12:40.451 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.451 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.711 [2024-11-18 13:29:10.729640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:40.711 [2024-11-18 13:29:10.729766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:40.711 [2024-11-18 13:29:10.732531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.711 [2024-11-18 13:29:10.732580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.711 [2024-11-18 13:29:10.732693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.711 [2024-11-18 13:29:10.732706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:40.711 { 00:12:40.711 "results": [ 00:12:40.711 { 00:12:40.711 "job": "raid_bdev1", 00:12:40.711 "core_mask": "0x1", 00:12:40.711 "workload": "randrw", 00:12:40.711 "percentage": 50, 00:12:40.711 "status": "finished", 00:12:40.711 "queue_depth": 1, 00:12:40.711 "io_size": 131072, 00:12:40.711 "runtime": 1.360088, 00:12:40.711 "iops": 8595.767332702002, 00:12:40.711 "mibps": 1074.4709165877503, 00:12:40.711 "io_failed": 0, 00:12:40.711 "io_timeout": 0, 00:12:40.711 "avg_latency_us": 113.68917545277057, 00:12:40.711 "min_latency_us": 23.699563318777294, 00:12:40.711 "max_latency_us": 1502.46288209607 00:12:40.711 } 00:12:40.711 ], 00:12:40.711 "core_count": 1 
00:12:40.711 } 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75164 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75164 ']' 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75164 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.711 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75164 00:12:40.970 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.970 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.970 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75164' 00:12:40.970 killing process with pid 75164 00:12:40.970 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75164 00:12:40.970 [2024-11-18 13:29:10.780580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.970 13:29:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75164 00:12:41.230 [2024-11-18 13:29:11.135425] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FHQiE5RIw4 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:42.608 00:12:42.608 real 0m4.901s 00:12:42.608 user 0m5.649s 00:12:42.608 sys 0m0.754s 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.608 13:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.608 ************************************ 00:12:42.608 END TEST raid_write_error_test 00:12:42.608 ************************************ 00:12:42.608 13:29:12 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:42.608 13:29:12 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:42.608 13:29:12 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:42.608 13:29:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:42.608 13:29:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.608 13:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.608 ************************************ 00:12:42.608 START TEST raid_rebuild_test 00:12:42.608 ************************************ 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:42.608 
13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75313 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75313 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75313 ']' 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.608 13:29:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.608 [2024-11-18 13:29:12.582238] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:42.608 [2024-11-18 13:29:12.582477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75313 ] 00:12:42.608 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:42.608 Zero copy mechanism will not be used. 
00:12:42.867 [2024-11-18 13:29:12.762529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.867 [2024-11-18 13:29:12.898814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.125 [2024-11-18 13:29:13.135220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.125 [2024-11-18 13:29:13.135369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.384 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.384 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:43.384 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.384 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:43.384 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.384 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.643 BaseBdev1_malloc 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.643 [2024-11-18 13:29:13.479013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:43.643 [2024-11-18 13:29:13.479101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.643 [2024-11-18 13:29:13.479127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:43.643 [2024-11-18 13:29:13.479139] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.643 [2024-11-18 13:29:13.481570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.643 [2024-11-18 13:29:13.481607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:43.643 BaseBdev1 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.643 BaseBdev2_malloc 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.643 [2024-11-18 13:29:13.541423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:43.643 [2024-11-18 13:29:13.541575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.643 [2024-11-18 13:29:13.541604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:43.643 [2024-11-18 13:29:13.541619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.643 [2024-11-18 13:29:13.544169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.643 [2024-11-18 13:29:13.544207] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:43.643 BaseBdev2 00:12:43.643 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 spare_malloc 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 spare_delay 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 [2024-11-18 13:29:13.625560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:43.644 [2024-11-18 13:29:13.625629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.644 [2024-11-18 13:29:13.625650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:43.644 [2024-11-18 13:29:13.625662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.644 [2024-11-18 
13:29:13.628123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.644 [2024-11-18 13:29:13.628237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:43.644 spare 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 [2024-11-18 13:29:13.637599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.644 [2024-11-18 13:29:13.639706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.644 [2024-11-18 13:29:13.639851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:43.644 [2024-11-18 13:29:13.639869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:43.644 [2024-11-18 13:29:13.640124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:43.644 [2024-11-18 13:29:13.640323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:43.644 [2024-11-18 13:29:13.640335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:43.644 [2024-11-18 13:29:13.640482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.644 13:29:13 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.644 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.903 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.903 "name": "raid_bdev1", 00:12:43.903 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:43.903 "strip_size_kb": 0, 00:12:43.903 "state": "online", 00:12:43.903 "raid_level": "raid1", 00:12:43.903 "superblock": false, 00:12:43.903 "num_base_bdevs": 2, 00:12:43.903 "num_base_bdevs_discovered": 2, 00:12:43.903 "num_base_bdevs_operational": 2, 00:12:43.903 "base_bdevs_list": [ 00:12:43.903 { 00:12:43.903 "name": "BaseBdev1", 
00:12:43.903 "uuid": "ceba0d81-cc18-52bb-a100-66b5f47f1ae2", 00:12:43.903 "is_configured": true, 00:12:43.903 "data_offset": 0, 00:12:43.903 "data_size": 65536 00:12:43.903 }, 00:12:43.903 { 00:12:43.903 "name": "BaseBdev2", 00:12:43.903 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:43.903 "is_configured": true, 00:12:43.903 "data_offset": 0, 00:12:43.903 "data_size": 65536 00:12:43.903 } 00:12:43.903 ] 00:12:43.903 }' 00:12:43.903 13:29:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.903 13:29:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.163 [2024-11-18 13:29:14.093079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:44.163 
13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.163 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:44.422 [2024-11-18 13:29:14.344477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:44.422 /dev/nbd0 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:44.422 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.423 1+0 records in 00:12:44.423 1+0 records out 00:12:44.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509402 s, 8.0 MB/s 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:44.423 13:29:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:12:49.765 65536+0 records in 00:12:49.765 65536+0 records out 00:12:49.765 33554432 bytes (34 MB, 32 MiB) copied, 4.68159 s, 7.2 MB/s 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:49.765 [2024-11-18 13:29:19.302177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:49.765 13:29:19 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.766 [2024-11-18 13:29:19.338190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.766 13:29:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.766 "name": "raid_bdev1", 00:12:49.766 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:49.766 "strip_size_kb": 0, 00:12:49.766 "state": "online", 00:12:49.766 "raid_level": "raid1", 00:12:49.766 "superblock": false, 00:12:49.766 "num_base_bdevs": 2, 00:12:49.766 "num_base_bdevs_discovered": 1, 00:12:49.766 "num_base_bdevs_operational": 1, 00:12:49.766 "base_bdevs_list": [ 00:12:49.766 { 00:12:49.766 "name": null, 00:12:49.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.766 "is_configured": false, 00:12:49.766 "data_offset": 0, 00:12:49.766 "data_size": 65536 00:12:49.766 }, 00:12:49.766 { 00:12:49.766 "name": "BaseBdev2", 00:12:49.766 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:49.766 "is_configured": true, 00:12:49.766 "data_offset": 0, 00:12:49.766 "data_size": 65536 00:12:49.766 } 00:12:49.766 ] 00:12:49.766 }' 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.766 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.025 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:50.025 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.025 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.025 [2024-11-18 13:29:19.813411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.025 [2024-11-18 13:29:19.831561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:50.025 13:29:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.025 13:29:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:50.025 [2024-11-18 13:29:19.833519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.964 "name": "raid_bdev1", 00:12:50.964 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:50.964 "strip_size_kb": 0, 00:12:50.964 "state": "online", 00:12:50.964 "raid_level": "raid1", 00:12:50.964 "superblock": false, 00:12:50.964 "num_base_bdevs": 2, 00:12:50.964 "num_base_bdevs_discovered": 2, 00:12:50.964 "num_base_bdevs_operational": 2, 00:12:50.964 "process": { 00:12:50.964 "type": "rebuild", 00:12:50.964 "target": "spare", 00:12:50.964 "progress": { 00:12:50.964 "blocks": 20480, 00:12:50.964 "percent": 31 00:12:50.964 } 00:12:50.964 }, 00:12:50.964 "base_bdevs_list": [ 00:12:50.964 { 00:12:50.964 "name": "spare", 00:12:50.964 "uuid": "b1a8be30-2202-5b03-ba16-879f9e6e213c", 00:12:50.964 "is_configured": true, 00:12:50.964 "data_offset": 0, 00:12:50.964 
"data_size": 65536 00:12:50.964 }, 00:12:50.964 { 00:12:50.964 "name": "BaseBdev2", 00:12:50.964 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:50.964 "is_configured": true, 00:12:50.964 "data_offset": 0, 00:12:50.964 "data_size": 65536 00:12:50.964 } 00:12:50.964 ] 00:12:50.964 }' 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.964 13:29:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.965 [2024-11-18 13:29:20.988919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.224 [2024-11-18 13:29:21.039305] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:51.224 [2024-11-18 13:29:21.039371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.224 [2024-11-18 13:29:21.039387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.224 [2024-11-18 13:29:21.039397] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.224 "name": "raid_bdev1", 00:12:51.224 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:51.224 "strip_size_kb": 0, 00:12:51.224 "state": "online", 00:12:51.224 "raid_level": "raid1", 00:12:51.224 "superblock": false, 00:12:51.224 "num_base_bdevs": 2, 00:12:51.224 "num_base_bdevs_discovered": 1, 00:12:51.224 "num_base_bdevs_operational": 1, 00:12:51.224 "base_bdevs_list": [ 00:12:51.224 { 00:12:51.224 "name": null, 00:12:51.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.224 
"is_configured": false, 00:12:51.224 "data_offset": 0, 00:12:51.224 "data_size": 65536 00:12:51.224 }, 00:12:51.224 { 00:12:51.224 "name": "BaseBdev2", 00:12:51.224 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:51.224 "is_configured": true, 00:12:51.224 "data_offset": 0, 00:12:51.224 "data_size": 65536 00:12:51.224 } 00:12:51.224 ] 00:12:51.224 }' 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.224 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.484 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.744 "name": "raid_bdev1", 00:12:51.744 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:51.744 "strip_size_kb": 0, 00:12:51.744 "state": "online", 00:12:51.744 "raid_level": "raid1", 00:12:51.744 "superblock": false, 00:12:51.744 "num_base_bdevs": 2, 00:12:51.744 
"num_base_bdevs_discovered": 1, 00:12:51.744 "num_base_bdevs_operational": 1, 00:12:51.744 "base_bdevs_list": [ 00:12:51.744 { 00:12:51.744 "name": null, 00:12:51.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.744 "is_configured": false, 00:12:51.744 "data_offset": 0, 00:12:51.744 "data_size": 65536 00:12:51.744 }, 00:12:51.744 { 00:12:51.744 "name": "BaseBdev2", 00:12:51.744 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:51.744 "is_configured": true, 00:12:51.744 "data_offset": 0, 00:12:51.744 "data_size": 65536 00:12:51.744 } 00:12:51.744 ] 00:12:51.744 }' 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.744 [2024-11-18 13:29:21.667066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.744 [2024-11-18 13:29:21.683223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.744 13:29:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:51.744 [2024-11-18 13:29:21.685121] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.683 13:29:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.943 "name": "raid_bdev1", 00:12:52.943 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:52.943 "strip_size_kb": 0, 00:12:52.943 "state": "online", 00:12:52.943 "raid_level": "raid1", 00:12:52.943 "superblock": false, 00:12:52.943 "num_base_bdevs": 2, 00:12:52.943 "num_base_bdevs_discovered": 2, 00:12:52.943 "num_base_bdevs_operational": 2, 00:12:52.943 "process": { 00:12:52.943 "type": "rebuild", 00:12:52.943 "target": "spare", 00:12:52.943 "progress": { 00:12:52.943 "blocks": 20480, 00:12:52.943 "percent": 31 00:12:52.943 } 00:12:52.943 }, 00:12:52.943 "base_bdevs_list": [ 00:12:52.943 { 00:12:52.943 "name": "spare", 00:12:52.943 "uuid": "b1a8be30-2202-5b03-ba16-879f9e6e213c", 00:12:52.943 "is_configured": true, 00:12:52.943 "data_offset": 0, 00:12:52.943 "data_size": 65536 00:12:52.943 }, 00:12:52.943 { 00:12:52.943 "name": "BaseBdev2", 00:12:52.943 "uuid": 
"0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:52.943 "is_configured": true, 00:12:52.943 "data_offset": 0, 00:12:52.943 "data_size": 65536 00:12:52.943 } 00:12:52.943 ] 00:12:52.943 }' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=376 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.943 "name": "raid_bdev1", 00:12:52.943 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:52.943 "strip_size_kb": 0, 00:12:52.943 "state": "online", 00:12:52.943 "raid_level": "raid1", 00:12:52.943 "superblock": false, 00:12:52.943 "num_base_bdevs": 2, 00:12:52.943 "num_base_bdevs_discovered": 2, 00:12:52.943 "num_base_bdevs_operational": 2, 00:12:52.943 "process": { 00:12:52.943 "type": "rebuild", 00:12:52.943 "target": "spare", 00:12:52.943 "progress": { 00:12:52.943 "blocks": 22528, 00:12:52.943 "percent": 34 00:12:52.943 } 00:12:52.943 }, 00:12:52.943 "base_bdevs_list": [ 00:12:52.943 { 00:12:52.943 "name": "spare", 00:12:52.943 "uuid": "b1a8be30-2202-5b03-ba16-879f9e6e213c", 00:12:52.943 "is_configured": true, 00:12:52.943 "data_offset": 0, 00:12:52.943 "data_size": 65536 00:12:52.943 }, 00:12:52.943 { 00:12:52.943 "name": "BaseBdev2", 00:12:52.943 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:52.943 "is_configured": true, 00:12:52.943 "data_offset": 0, 00:12:52.943 "data_size": 65536 00:12:52.943 } 00:12:52.943 ] 00:12:52.943 }' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.943 13:29:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.324 13:29:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.325 13:29:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.325 13:29:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.325 13:29:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.325 "name": "raid_bdev1", 00:12:54.325 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:54.325 "strip_size_kb": 0, 00:12:54.325 "state": "online", 00:12:54.325 "raid_level": "raid1", 00:12:54.325 "superblock": false, 00:12:54.325 "num_base_bdevs": 2, 00:12:54.325 "num_base_bdevs_discovered": 2, 00:12:54.325 "num_base_bdevs_operational": 2, 00:12:54.325 "process": { 00:12:54.325 "type": "rebuild", 00:12:54.325 "target": "spare", 00:12:54.325 "progress": { 00:12:54.325 "blocks": 47104, 00:12:54.325 "percent": 71 00:12:54.325 } 00:12:54.325 }, 00:12:54.325 "base_bdevs_list": [ 00:12:54.325 { 00:12:54.325 "name": "spare", 00:12:54.325 "uuid": 
"b1a8be30-2202-5b03-ba16-879f9e6e213c", 00:12:54.325 "is_configured": true, 00:12:54.325 "data_offset": 0, 00:12:54.325 "data_size": 65536 00:12:54.325 }, 00:12:54.325 { 00:12:54.325 "name": "BaseBdev2", 00:12:54.325 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:54.325 "is_configured": true, 00:12:54.325 "data_offset": 0, 00:12:54.325 "data_size": 65536 00:12:54.325 } 00:12:54.325 ] 00:12:54.325 }' 00:12:54.325 13:29:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.325 13:29:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.325 13:29:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.325 13:29:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.325 13:29:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.894 [2024-11-18 13:29:24.898777] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:54.894 [2024-11-18 13:29:24.898866] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:54.894 [2024-11-18 13:29:24.898914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.154 13:29:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.154 "name": "raid_bdev1", 00:12:55.154 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:55.154 "strip_size_kb": 0, 00:12:55.154 "state": "online", 00:12:55.154 "raid_level": "raid1", 00:12:55.154 "superblock": false, 00:12:55.154 "num_base_bdevs": 2, 00:12:55.154 "num_base_bdevs_discovered": 2, 00:12:55.154 "num_base_bdevs_operational": 2, 00:12:55.154 "base_bdevs_list": [ 00:12:55.154 { 00:12:55.154 "name": "spare", 00:12:55.154 "uuid": "b1a8be30-2202-5b03-ba16-879f9e6e213c", 00:12:55.154 "is_configured": true, 00:12:55.154 "data_offset": 0, 00:12:55.154 "data_size": 65536 00:12:55.154 }, 00:12:55.154 { 00:12:55.154 "name": "BaseBdev2", 00:12:55.154 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:55.154 "is_configured": true, 00:12:55.154 "data_offset": 0, 00:12:55.154 "data_size": 65536 00:12:55.154 } 00:12:55.154 ] 00:12:55.154 }' 00:12:55.154 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.414 "name": "raid_bdev1", 00:12:55.414 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:55.414 "strip_size_kb": 0, 00:12:55.414 "state": "online", 00:12:55.414 "raid_level": "raid1", 00:12:55.414 "superblock": false, 00:12:55.414 "num_base_bdevs": 2, 00:12:55.414 "num_base_bdevs_discovered": 2, 00:12:55.414 "num_base_bdevs_operational": 2, 00:12:55.414 "base_bdevs_list": [ 00:12:55.414 { 00:12:55.414 "name": "spare", 00:12:55.414 "uuid": "b1a8be30-2202-5b03-ba16-879f9e6e213c", 00:12:55.414 "is_configured": true, 00:12:55.414 "data_offset": 0, 00:12:55.414 "data_size": 65536 00:12:55.414 }, 00:12:55.414 { 00:12:55.414 "name": "BaseBdev2", 00:12:55.414 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:55.414 "is_configured": true, 00:12:55.414 "data_offset": 0, 00:12:55.414 "data_size": 65536 
00:12:55.414 } 00:12:55.414 ] 00:12:55.414 }' 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.414 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.415 
13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.415 "name": "raid_bdev1", 00:12:55.415 "uuid": "da118ceb-91c0-4038-be74-4064e21533b5", 00:12:55.415 "strip_size_kb": 0, 00:12:55.415 "state": "online", 00:12:55.415 "raid_level": "raid1", 00:12:55.415 "superblock": false, 00:12:55.415 "num_base_bdevs": 2, 00:12:55.415 "num_base_bdevs_discovered": 2, 00:12:55.415 "num_base_bdevs_operational": 2, 00:12:55.415 "base_bdevs_list": [ 00:12:55.415 { 00:12:55.415 "name": "spare", 00:12:55.415 "uuid": "b1a8be30-2202-5b03-ba16-879f9e6e213c", 00:12:55.415 "is_configured": true, 00:12:55.415 "data_offset": 0, 00:12:55.415 "data_size": 65536 00:12:55.415 }, 00:12:55.415 { 00:12:55.415 "name": "BaseBdev2", 00:12:55.415 "uuid": "0ec24bc2-f8d4-50a9-beeb-0abb354183aa", 00:12:55.415 "is_configured": true, 00:12:55.415 "data_offset": 0, 00:12:55.415 "data_size": 65536 00:12:55.415 } 00:12:55.415 ] 00:12:55.415 }' 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.415 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.986 [2024-11-18 13:29:25.853153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.986 [2024-11-18 13:29:25.853193] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.986 [2024-11-18 13:29:25.853287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.986 [2024-11-18 13:29:25.853359] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.986 [2024-11-18 13:29:25.853369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:55.986 13:29:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:56.246 /dev/nbd0 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.246 1+0 records in 00:12:56.246 1+0 records out 00:12:56.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347148 s, 11.8 MB/s 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.246 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:56.506 /dev/nbd1 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.506 1+0 records in 00:12:56.506 1+0 records out 00:12:56.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396842 s, 10.3 MB/s 00:12:56.506 13:29:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.506 13:29:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:56.766 13:29:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:56.766 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.766 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:56.766 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:56.766 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:56.766 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.766 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:56.766 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:57.026 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:57.026 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:57.026 
13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.026 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.026 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:57.026 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:57.026 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.026 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.026 13:29:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75313 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75313 ']' 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75313 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.026 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75313 00:12:57.287 killing process with pid 75313 00:12:57.287 Received shutdown signal, test time was about 60.000000 seconds 00:12:57.287 00:12:57.287 Latency(us) 00:12:57.287 [2024-11-18T13:29:27.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.287 [2024-11-18T13:29:27.341Z] =================================================================================================================== 00:12:57.287 [2024-11-18T13:29:27.341Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:57.287 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.287 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.287 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75313' 00:12:57.287 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75313 00:12:57.287 [2024-11-18 13:29:27.085115] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.287 13:29:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75313 00:12:57.548 [2024-11-18 13:29:27.375500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:58.487 00:12:58.487 real 0m15.982s 00:12:58.487 user 0m17.567s 00:12:58.487 sys 0m3.398s 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.487 ************************************ 00:12:58.487 END TEST raid_rebuild_test 
00:12:58.487 ************************************ 00:12:58.487 13:29:28 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:58.487 13:29:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:58.487 13:29:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.487 13:29:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.487 ************************************ 00:12:58.487 START TEST raid_rebuild_test_sb 00:12:58.487 ************************************ 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:58.487 13:29:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75731 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75731 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75731 ']' 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.747 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.747 13:29:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.747 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:58.747 Zero copy mechanism will not be used. 00:12:58.747 [2024-11-18 13:29:28.629882] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:58.747 [2024-11-18 13:29:28.629988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75731 ] 00:12:59.007 [2024-11-18 13:29:28.799227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.007 [2024-11-18 13:29:28.906614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.267 [2024-11-18 13:29:29.093632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.267 [2024-11-18 13:29:29.093672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.527 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.527 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:59.527 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 BaseBdev1_malloc 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 [2024-11-18 13:29:29.514709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:59.528 [2024-11-18 13:29:29.514876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.528 [2024-11-18 13:29:29.514904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.528 [2024-11-18 13:29:29.514916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.528 [2024-11-18 13:29:29.516969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.528 [2024-11-18 13:29:29.517012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.528 BaseBdev1 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 BaseBdev2_malloc 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 [2024-11-18 13:29:29.567521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:59.528 [2024-11-18 13:29:29.567587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.528 [2024-11-18 13:29:29.567606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:59.528 [2024-11-18 13:29:29.567618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.528 [2024-11-18 13:29:29.569596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.528 [2024-11-18 13:29:29.569635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.528 BaseBdev2 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 spare_malloc 00:12:59.788 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.788 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:59.788 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:59.788 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 spare_delay 00:12:59.788 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.788 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.788 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.789 [2024-11-18 13:29:29.644575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.789 [2024-11-18 13:29:29.644636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.789 [2024-11-18 13:29:29.644654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:59.789 [2024-11-18 13:29:29.644665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.789 [2024-11-18 13:29:29.646972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.789 [2024-11-18 13:29:29.647013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.789 spare 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.789 [2024-11-18 13:29:29.656611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.789 [2024-11-18 13:29:29.658498] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.789 [2024-11-18 13:29:29.658682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.789 [2024-11-18 13:29:29.658699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.789 [2024-11-18 13:29:29.658932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:59.789 [2024-11-18 13:29:29.659101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.789 [2024-11-18 13:29:29.659110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:59.789 [2024-11-18 13:29:29.659287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.789 13:29:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.789 "name": "raid_bdev1", 00:12:59.789 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:12:59.789 "strip_size_kb": 0, 00:12:59.789 "state": "online", 00:12:59.789 "raid_level": "raid1", 00:12:59.789 "superblock": true, 00:12:59.789 "num_base_bdevs": 2, 00:12:59.789 "num_base_bdevs_discovered": 2, 00:12:59.789 "num_base_bdevs_operational": 2, 00:12:59.789 "base_bdevs_list": [ 00:12:59.789 { 00:12:59.789 "name": "BaseBdev1", 00:12:59.789 "uuid": "bfec9914-1f00-53f4-ac47-f6ea303c6734", 00:12:59.789 "is_configured": true, 00:12:59.789 "data_offset": 2048, 00:12:59.789 "data_size": 63488 00:12:59.789 }, 00:12:59.789 { 00:12:59.789 "name": "BaseBdev2", 00:12:59.789 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:12:59.789 "is_configured": true, 00:12:59.789 "data_offset": 2048, 00:12:59.789 "data_size": 63488 00:12:59.789 } 00:12:59.789 ] 00:12:59.789 }' 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.789 13:29:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r 
'.[].num_blocks' 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 [2024-11-18 13:29:30.124085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:00.359 13:29:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.359 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:00.359 [2024-11-18 13:29:30.383396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:00.359 /dev/nbd0 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.619 1+0 records in 00:13:00.619 1+0 records out 00:13:00.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557685 s, 7.3 MB/s 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:00.619 13:29:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:04.825 63488+0 records in 00:13:04.825 63488+0 records out 00:13:04.825 32505856 bytes (33 MB, 31 MiB) copied, 4.11614 s, 7.9 MB/s 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:04.825 [2024-11-18 13:29:34.773819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.825 13:29:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 [2024-11-18 13:29:34.809833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.826 "name": "raid_bdev1", 00:13:04.826 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:04.826 "strip_size_kb": 0, 00:13:04.826 "state": "online", 00:13:04.826 "raid_level": "raid1", 00:13:04.826 "superblock": true, 00:13:04.826 "num_base_bdevs": 2, 00:13:04.826 "num_base_bdevs_discovered": 1, 00:13:04.826 "num_base_bdevs_operational": 1, 00:13:04.826 "base_bdevs_list": [ 00:13:04.826 { 00:13:04.826 "name": null, 00:13:04.826 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:04.826 "is_configured": false, 00:13:04.826 "data_offset": 0, 00:13:04.826 "data_size": 63488 00:13:04.826 }, 00:13:04.826 { 00:13:04.826 "name": "BaseBdev2", 00:13:04.826 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:04.826 "is_configured": true, 00:13:04.826 "data_offset": 2048, 00:13:04.826 "data_size": 63488 00:13:04.826 } 00:13:04.826 ] 00:13:04.826 }' 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.826 13:29:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.394 13:29:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.394 13:29:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.394 13:29:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.394 [2024-11-18 13:29:35.257104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.394 [2024-11-18 13:29:35.274270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:05.395 13:29:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 [2024-11-18 13:29:35.276264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.395 13:29:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.334 
13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.334 "name": "raid_bdev1", 00:13:06.334 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:06.334 "strip_size_kb": 0, 00:13:06.334 "state": "online", 00:13:06.334 "raid_level": "raid1", 00:13:06.334 "superblock": true, 00:13:06.334 "num_base_bdevs": 2, 00:13:06.334 "num_base_bdevs_discovered": 2, 00:13:06.334 "num_base_bdevs_operational": 2, 00:13:06.334 "process": { 00:13:06.334 "type": "rebuild", 00:13:06.334 "target": "spare", 00:13:06.334 "progress": { 00:13:06.334 "blocks": 20480, 00:13:06.334 "percent": 32 00:13:06.334 } 00:13:06.334 }, 00:13:06.334 "base_bdevs_list": [ 00:13:06.334 { 00:13:06.334 "name": "spare", 00:13:06.334 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:06.334 "is_configured": true, 00:13:06.334 "data_offset": 2048, 00:13:06.334 "data_size": 63488 00:13:06.334 }, 00:13:06.334 { 00:13:06.334 "name": "BaseBdev2", 00:13:06.334 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:06.334 "is_configured": true, 00:13:06.334 "data_offset": 2048, 00:13:06.334 "data_size": 63488 00:13:06.334 } 00:13:06.334 ] 00:13:06.334 }' 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.334 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.594 [2024-11-18 13:29:36.427693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.594 [2024-11-18 13:29:36.481286] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.594 [2024-11-18 13:29:36.481343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.594 [2024-11-18 13:29:36.481358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.594 [2024-11-18 13:29:36.481370] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.594 "name": "raid_bdev1", 00:13:06.594 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:06.594 "strip_size_kb": 0, 00:13:06.594 "state": "online", 00:13:06.594 "raid_level": "raid1", 00:13:06.594 "superblock": true, 00:13:06.594 "num_base_bdevs": 2, 00:13:06.594 "num_base_bdevs_discovered": 1, 00:13:06.594 "num_base_bdevs_operational": 1, 00:13:06.594 "base_bdevs_list": [ 00:13:06.594 { 00:13:06.594 "name": null, 00:13:06.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.594 "is_configured": false, 00:13:06.594 "data_offset": 0, 00:13:06.594 "data_size": 63488 00:13:06.594 }, 00:13:06.594 { 00:13:06.594 "name": "BaseBdev2", 00:13:06.594 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:06.594 "is_configured": true, 00:13:06.594 "data_offset": 2048, 00:13:06.594 "data_size": 63488 00:13:06.594 } 00:13:06.594 ] 00:13:06.594 }' 00:13:06.594 13:29:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.594 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.164 13:29:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.164 "name": "raid_bdev1", 00:13:07.164 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:07.164 "strip_size_kb": 0, 00:13:07.164 "state": "online", 00:13:07.164 "raid_level": "raid1", 00:13:07.164 "superblock": true, 00:13:07.164 "num_base_bdevs": 2, 00:13:07.164 "num_base_bdevs_discovered": 1, 00:13:07.164 "num_base_bdevs_operational": 1, 00:13:07.164 "base_bdevs_list": [ 00:13:07.164 { 00:13:07.164 "name": null, 00:13:07.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.164 "is_configured": false, 00:13:07.164 "data_offset": 0, 00:13:07.164 "data_size": 63488 00:13:07.164 }, 00:13:07.164 
{ 00:13:07.164 "name": "BaseBdev2", 00:13:07.164 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:07.164 "is_configured": true, 00:13:07.164 "data_offset": 2048, 00:13:07.164 "data_size": 63488 00:13:07.164 } 00:13:07.164 ] 00:13:07.164 }' 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.164 [2024-11-18 13:29:37.114882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.164 [2024-11-18 13:29:37.130644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.164 13:29:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:07.164 [2024-11-18 13:29:37.132505] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.100 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.100 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.100 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.100 13:29:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.100 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.100 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.100 13:29:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.100 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.100 13:29:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.360 "name": "raid_bdev1", 00:13:08.360 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:08.360 "strip_size_kb": 0, 00:13:08.360 "state": "online", 00:13:08.360 "raid_level": "raid1", 00:13:08.360 "superblock": true, 00:13:08.360 "num_base_bdevs": 2, 00:13:08.360 "num_base_bdevs_discovered": 2, 00:13:08.360 "num_base_bdevs_operational": 2, 00:13:08.360 "process": { 00:13:08.360 "type": "rebuild", 00:13:08.360 "target": "spare", 00:13:08.360 "progress": { 00:13:08.360 "blocks": 20480, 00:13:08.360 "percent": 32 00:13:08.360 } 00:13:08.360 }, 00:13:08.360 "base_bdevs_list": [ 00:13:08.360 { 00:13:08.360 "name": "spare", 00:13:08.360 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:08.360 "is_configured": true, 00:13:08.360 "data_offset": 2048, 00:13:08.360 "data_size": 63488 00:13:08.360 }, 00:13:08.360 { 00:13:08.360 "name": "BaseBdev2", 00:13:08.360 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:08.360 "is_configured": true, 00:13:08.360 "data_offset": 2048, 00:13:08.360 "data_size": 63488 00:13:08.360 } 00:13:08.360 ] 00:13:08.360 }' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:08.360 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.360 "name": "raid_bdev1", 00:13:08.360 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:08.360 "strip_size_kb": 0, 00:13:08.360 "state": "online", 00:13:08.360 "raid_level": "raid1", 00:13:08.360 "superblock": true, 00:13:08.360 "num_base_bdevs": 2, 00:13:08.360 "num_base_bdevs_discovered": 2, 00:13:08.360 "num_base_bdevs_operational": 2, 00:13:08.360 "process": { 00:13:08.360 "type": "rebuild", 00:13:08.360 "target": "spare", 00:13:08.360 "progress": { 00:13:08.360 "blocks": 22528, 00:13:08.360 "percent": 35 00:13:08.360 } 00:13:08.360 }, 00:13:08.360 "base_bdevs_list": [ 00:13:08.360 { 00:13:08.360 "name": "spare", 00:13:08.360 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:08.360 "is_configured": true, 00:13:08.360 "data_offset": 2048, 00:13:08.360 "data_size": 63488 00:13:08.360 }, 00:13:08.360 { 00:13:08.360 "name": "BaseBdev2", 00:13:08.360 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:08.360 "is_configured": true, 00:13:08.360 "data_offset": 2048, 00:13:08.360 "data_size": 63488 00:13:08.360 } 00:13:08.360 ] 00:13:08.360 }' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.360 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.619 13:29:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.619 13:29:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.555 "name": "raid_bdev1", 00:13:09.555 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:09.555 "strip_size_kb": 0, 00:13:09.555 "state": "online", 00:13:09.555 "raid_level": "raid1", 00:13:09.555 "superblock": true, 00:13:09.555 "num_base_bdevs": 2, 00:13:09.555 "num_base_bdevs_discovered": 2, 00:13:09.555 "num_base_bdevs_operational": 2, 00:13:09.555 "process": { 00:13:09.555 "type": "rebuild", 00:13:09.555 "target": "spare", 00:13:09.555 "progress": { 00:13:09.555 "blocks": 47104, 00:13:09.555 "percent": 74 00:13:09.555 } 00:13:09.555 }, 00:13:09.555 "base_bdevs_list": [ 00:13:09.555 { 
00:13:09.555 "name": "spare", 00:13:09.555 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:09.555 "is_configured": true, 00:13:09.555 "data_offset": 2048, 00:13:09.555 "data_size": 63488 00:13:09.555 }, 00:13:09.555 { 00:13:09.555 "name": "BaseBdev2", 00:13:09.555 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:09.555 "is_configured": true, 00:13:09.555 "data_offset": 2048, 00:13:09.555 "data_size": 63488 00:13:09.555 } 00:13:09.555 ] 00:13:09.555 }' 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.555 13:29:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:10.493 [2024-11-18 13:29:40.245098] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:10.493 [2024-11-18 13:29:40.245243] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:10.493 [2024-11-18 13:29:40.245343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.753 13:29:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.753 "name": "raid_bdev1", 00:13:10.753 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:10.753 "strip_size_kb": 0, 00:13:10.753 "state": "online", 00:13:10.753 "raid_level": "raid1", 00:13:10.753 "superblock": true, 00:13:10.753 "num_base_bdevs": 2, 00:13:10.753 "num_base_bdevs_discovered": 2, 00:13:10.753 "num_base_bdevs_operational": 2, 00:13:10.753 "base_bdevs_list": [ 00:13:10.753 { 00:13:10.753 "name": "spare", 00:13:10.753 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:10.753 "is_configured": true, 00:13:10.753 "data_offset": 2048, 00:13:10.753 "data_size": 63488 00:13:10.753 }, 00:13:10.753 { 00:13:10.753 "name": "BaseBdev2", 00:13:10.753 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:10.753 "is_configured": true, 00:13:10.753 "data_offset": 2048, 00:13:10.753 "data_size": 63488 00:13:10.753 } 00:13:10.753 ] 00:13:10.753 }' 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.753 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.753 "name": "raid_bdev1", 00:13:10.753 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:10.753 "strip_size_kb": 0, 00:13:10.753 "state": "online", 00:13:10.753 "raid_level": "raid1", 00:13:10.753 "superblock": true, 00:13:10.754 "num_base_bdevs": 2, 00:13:10.754 "num_base_bdevs_discovered": 2, 00:13:10.754 "num_base_bdevs_operational": 2, 00:13:10.754 "base_bdevs_list": [ 00:13:10.754 { 00:13:10.754 "name": "spare", 00:13:10.754 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:10.754 "is_configured": true, 00:13:10.754 "data_offset": 2048, 00:13:10.754 "data_size": 63488 00:13:10.754 }, 00:13:10.754 { 00:13:10.754 "name": 
"BaseBdev2", 00:13:10.754 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:10.754 "is_configured": true, 00:13:10.754 "data_offset": 2048, 00:13:10.754 "data_size": 63488 00:13:10.754 } 00:13:10.754 ] 00:13:10.754 }' 00:13:10.754 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.014 "name": "raid_bdev1", 00:13:11.014 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:11.014 "strip_size_kb": 0, 00:13:11.014 "state": "online", 00:13:11.014 "raid_level": "raid1", 00:13:11.014 "superblock": true, 00:13:11.014 "num_base_bdevs": 2, 00:13:11.014 "num_base_bdevs_discovered": 2, 00:13:11.014 "num_base_bdevs_operational": 2, 00:13:11.014 "base_bdevs_list": [ 00:13:11.014 { 00:13:11.014 "name": "spare", 00:13:11.014 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:11.014 "is_configured": true, 00:13:11.014 "data_offset": 2048, 00:13:11.014 "data_size": 63488 00:13:11.014 }, 00:13:11.014 { 00:13:11.014 "name": "BaseBdev2", 00:13:11.014 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:11.014 "is_configured": true, 00:13:11.014 "data_offset": 2048, 00:13:11.014 "data_size": 63488 00:13:11.014 } 00:13:11.014 ] 00:13:11.014 }' 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.014 13:29:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.585 [2024-11-18 13:29:41.342504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.585 [2024-11-18 13:29:41.342602] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.585 [2024-11-18 13:29:41.342729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.585 [2024-11-18 13:29:41.342813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.585 [2024-11-18 13:29:41.342869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:11.585 /dev/nbd0 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.585 1+0 records in 00:13:11.585 1+0 records out 00:13:11.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000389987 s, 10.5 MB/s 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:11.585 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:11.846 /dev/nbd1 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.846 13:29:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.846 1+0 records in 00:13:11.846 1+0 records out 00:13:11.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417004 s, 9.8 MB/s 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:11.846 13:29:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:12.106 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:12.106 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.106 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:12.106 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.106 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:12.106 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.106 
13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.366 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.626 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.627 [2024-11-18 13:29:42.511517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:12.627 [2024-11-18 13:29:42.511577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.627 [2024-11-18 13:29:42.511599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:12.627 [2024-11-18 13:29:42.511609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.627 [2024-11-18 13:29:42.513780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.627 [2024-11-18 13:29:42.513894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:12.627 [2024-11-18 13:29:42.513989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:12.627 [2024-11-18 13:29:42.514041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.627 [2024-11-18 13:29:42.514226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:12.627 spare 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.627 [2024-11-18 13:29:42.614128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:12.627 [2024-11-18 13:29:42.614163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:12.627 [2024-11-18 13:29:42.614421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:12.627 [2024-11-18 13:29:42.614591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:12.627 [2024-11-18 13:29:42.614600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:12.627 [2024-11-18 13:29:42.614770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.627 "name": "raid_bdev1", 00:13:12.627 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:12.627 "strip_size_kb": 0, 00:13:12.627 "state": "online", 00:13:12.627 "raid_level": "raid1", 00:13:12.627 "superblock": true, 00:13:12.627 "num_base_bdevs": 2, 00:13:12.627 "num_base_bdevs_discovered": 2, 00:13:12.627 "num_base_bdevs_operational": 2, 00:13:12.627 "base_bdevs_list": [ 00:13:12.627 { 00:13:12.627 "name": "spare", 00:13:12.627 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:12.627 "is_configured": true, 00:13:12.627 "data_offset": 2048, 00:13:12.627 "data_size": 63488 00:13:12.627 }, 00:13:12.627 { 00:13:12.627 "name": "BaseBdev2", 00:13:12.627 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:12.627 "is_configured": true, 00:13:12.627 "data_offset": 2048, 00:13:12.627 "data_size": 63488 00:13:12.627 } 00:13:12.627 ] 00:13:12.627 }' 00:13:12.627 13:29:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.627 13:29:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.220 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.221 "name": "raid_bdev1", 00:13:13.221 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:13.221 "strip_size_kb": 0, 00:13:13.221 "state": "online", 00:13:13.221 "raid_level": "raid1", 00:13:13.221 "superblock": true, 00:13:13.221 "num_base_bdevs": 2, 00:13:13.221 "num_base_bdevs_discovered": 2, 00:13:13.221 "num_base_bdevs_operational": 2, 00:13:13.221 "base_bdevs_list": [ 00:13:13.221 { 00:13:13.221 "name": "spare", 00:13:13.221 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:13.221 "is_configured": true, 00:13:13.221 "data_offset": 2048, 00:13:13.221 "data_size": 63488 00:13:13.221 }, 
00:13:13.221 { 00:13:13.221 "name": "BaseBdev2", 00:13:13.221 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:13.221 "is_configured": true, 00:13:13.221 "data_offset": 2048, 00:13:13.221 "data_size": 63488 00:13:13.221 } 00:13:13.221 ] 00:13:13.221 }' 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.221 [2024-11-18 13:29:43.246318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.221 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.482 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.482 "name": "raid_bdev1", 00:13:13.482 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:13.482 "strip_size_kb": 0, 00:13:13.482 "state": "online", 00:13:13.482 "raid_level": "raid1", 00:13:13.482 "superblock": true, 00:13:13.482 "num_base_bdevs": 2, 00:13:13.482 "num_base_bdevs_discovered": 1, 00:13:13.482 "num_base_bdevs_operational": 
1, 00:13:13.482 "base_bdevs_list": [ 00:13:13.482 { 00:13:13.482 "name": null, 00:13:13.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.482 "is_configured": false, 00:13:13.482 "data_offset": 0, 00:13:13.482 "data_size": 63488 00:13:13.482 }, 00:13:13.482 { 00:13:13.482 "name": "BaseBdev2", 00:13:13.482 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:13.482 "is_configured": true, 00:13:13.482 "data_offset": 2048, 00:13:13.482 "data_size": 63488 00:13:13.482 } 00:13:13.482 ] 00:13:13.482 }' 00:13:13.482 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.482 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.742 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:13.742 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.742 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.742 [2024-11-18 13:29:43.661650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.742 [2024-11-18 13:29:43.661934] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:13.742 [2024-11-18 13:29:43.661999] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:13.742 [2024-11-18 13:29:43.662061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.742 [2024-11-18 13:29:43.677346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:13.742 13:29:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.742 [2024-11-18 13:29:43.679229] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.742 13:29:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.682 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.682 "name": "raid_bdev1", 00:13:14.682 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:14.682 "strip_size_kb": 0, 00:13:14.682 "state": "online", 00:13:14.682 "raid_level": "raid1", 
00:13:14.682 "superblock": true, 00:13:14.682 "num_base_bdevs": 2, 00:13:14.682 "num_base_bdevs_discovered": 2, 00:13:14.682 "num_base_bdevs_operational": 2, 00:13:14.682 "process": { 00:13:14.682 "type": "rebuild", 00:13:14.682 "target": "spare", 00:13:14.682 "progress": { 00:13:14.682 "blocks": 20480, 00:13:14.682 "percent": 32 00:13:14.682 } 00:13:14.682 }, 00:13:14.682 "base_bdevs_list": [ 00:13:14.682 { 00:13:14.682 "name": "spare", 00:13:14.682 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:14.682 "is_configured": true, 00:13:14.682 "data_offset": 2048, 00:13:14.682 "data_size": 63488 00:13:14.682 }, 00:13:14.682 { 00:13:14.682 "name": "BaseBdev2", 00:13:14.682 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:14.682 "is_configured": true, 00:13:14.682 "data_offset": 2048, 00:13:14.682 "data_size": 63488 00:13:14.682 } 00:13:14.682 ] 00:13:14.682 }' 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.940 [2024-11-18 13:29:44.843338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.940 [2024-11-18 13:29:44.884331] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.940 [2024-11-18 13:29:44.884387] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:14.940 [2024-11-18 13:29:44.884401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.940 [2024-11-18 13:29:44.884409] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.940 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.941 "name": "raid_bdev1", 00:13:14.941 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:14.941 "strip_size_kb": 0, 00:13:14.941 "state": "online", 00:13:14.941 "raid_level": "raid1", 00:13:14.941 "superblock": true, 00:13:14.941 "num_base_bdevs": 2, 00:13:14.941 "num_base_bdevs_discovered": 1, 00:13:14.941 "num_base_bdevs_operational": 1, 00:13:14.941 "base_bdevs_list": [ 00:13:14.941 { 00:13:14.941 "name": null, 00:13:14.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.941 "is_configured": false, 00:13:14.941 "data_offset": 0, 00:13:14.941 "data_size": 63488 00:13:14.941 }, 00:13:14.941 { 00:13:14.941 "name": "BaseBdev2", 00:13:14.941 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:14.941 "is_configured": true, 00:13:14.941 "data_offset": 2048, 00:13:14.941 "data_size": 63488 00:13:14.941 } 00:13:14.941 ] 00:13:14.941 }' 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.941 13:29:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.510 13:29:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.510 13:29:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.510 13:29:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.510 [2024-11-18 13:29:45.293840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.510 [2024-11-18 13:29:45.293986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.510 [2024-11-18 13:29:45.294030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:15.510 [2024-11-18 13:29:45.294060] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.510 [2024-11-18 13:29:45.294560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.510 [2024-11-18 13:29:45.294625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.510 [2024-11-18 13:29:45.294764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:15.510 [2024-11-18 13:29:45.294809] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:15.510 [2024-11-18 13:29:45.294853] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:15.510 [2024-11-18 13:29:45.294928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:15.510 [2024-11-18 13:29:45.311331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:15.510 spare 00:13:15.510 13:29:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.510 [2024-11-18 13:29:45.313242] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.510 13:29:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.448 "name": "raid_bdev1", 00:13:16.448 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:16.448 "strip_size_kb": 0, 00:13:16.448 "state": "online", 00:13:16.448 "raid_level": "raid1", 00:13:16.448 "superblock": true, 00:13:16.448 "num_base_bdevs": 2, 00:13:16.448 "num_base_bdevs_discovered": 2, 00:13:16.448 "num_base_bdevs_operational": 2, 00:13:16.448 "process": { 00:13:16.448 "type": "rebuild", 00:13:16.448 "target": "spare", 00:13:16.448 "progress": { 00:13:16.448 "blocks": 20480, 00:13:16.448 "percent": 32 00:13:16.448 } 00:13:16.448 }, 00:13:16.448 "base_bdevs_list": [ 00:13:16.448 { 00:13:16.448 "name": "spare", 00:13:16.448 "uuid": "efc7dd96-158a-5685-90e3-a9517bd110a2", 00:13:16.448 "is_configured": true, 00:13:16.448 "data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 }, 00:13:16.448 { 00:13:16.448 "name": "BaseBdev2", 00:13:16.448 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:16.448 "is_configured": true, 00:13:16.448 "data_offset": 2048, 00:13:16.448 "data_size": 63488 00:13:16.448 } 00:13:16.448 ] 00:13:16.448 }' 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.448 
13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.448 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.448 [2024-11-18 13:29:46.460890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.708 [2024-11-18 13:29:46.518486] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:16.708 [2024-11-18 13:29:46.518585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.708 [2024-11-18 13:29:46.518620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.708 [2024-11-18 13:29:46.518649] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.708 "name": "raid_bdev1", 00:13:16.708 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:16.708 "strip_size_kb": 0, 00:13:16.708 "state": "online", 00:13:16.708 "raid_level": "raid1", 00:13:16.708 "superblock": true, 00:13:16.708 "num_base_bdevs": 2, 00:13:16.708 "num_base_bdevs_discovered": 1, 00:13:16.708 "num_base_bdevs_operational": 1, 00:13:16.708 "base_bdevs_list": [ 00:13:16.708 { 00:13:16.708 "name": null, 00:13:16.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.708 "is_configured": false, 00:13:16.708 "data_offset": 0, 00:13:16.708 "data_size": 63488 00:13:16.708 }, 00:13:16.708 { 00:13:16.708 "name": "BaseBdev2", 00:13:16.708 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:16.708 "is_configured": true, 00:13:16.708 "data_offset": 2048, 00:13:16.708 "data_size": 63488 00:13:16.708 } 00:13:16.708 ] 00:13:16.708 }' 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.708 13:29:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.967 13:29:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.967 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.967 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.967 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.967 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.967 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.967 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.967 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.967 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.226 "name": "raid_bdev1", 00:13:17.226 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:17.226 "strip_size_kb": 0, 00:13:17.226 "state": "online", 00:13:17.226 "raid_level": "raid1", 00:13:17.226 "superblock": true, 00:13:17.226 "num_base_bdevs": 2, 00:13:17.226 "num_base_bdevs_discovered": 1, 00:13:17.226 "num_base_bdevs_operational": 1, 00:13:17.226 "base_bdevs_list": [ 00:13:17.226 { 00:13:17.226 "name": null, 00:13:17.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.226 "is_configured": false, 00:13:17.226 "data_offset": 0, 00:13:17.226 "data_size": 63488 00:13:17.226 }, 00:13:17.226 { 00:13:17.226 "name": "BaseBdev2", 00:13:17.226 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:17.226 "is_configured": true, 00:13:17.226 "data_offset": 2048, 00:13:17.226 "data_size": 
63488 00:13:17.226 } 00:13:17.226 ] 00:13:17.226 }' 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.226 [2024-11-18 13:29:47.152685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:17.226 [2024-11-18 13:29:47.152745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.226 [2024-11-18 13:29:47.152770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:17.226 [2024-11-18 13:29:47.152786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.226 [2024-11-18 13:29:47.153259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.226 [2024-11-18 13:29:47.153278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:17.226 [2024-11-18 13:29:47.153361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:17.226 [2024-11-18 13:29:47.153375] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:17.226 [2024-11-18 13:29:47.153384] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:17.226 [2024-11-18 13:29:47.153394] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:17.226 BaseBdev1 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.226 13:29:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.164 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.423 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.423 "name": "raid_bdev1", 00:13:18.423 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:18.423 "strip_size_kb": 0, 00:13:18.423 "state": "online", 00:13:18.423 "raid_level": "raid1", 00:13:18.423 "superblock": true, 00:13:18.423 "num_base_bdevs": 2, 00:13:18.423 "num_base_bdevs_discovered": 1, 00:13:18.423 "num_base_bdevs_operational": 1, 00:13:18.423 "base_bdevs_list": [ 00:13:18.423 { 00:13:18.423 "name": null, 00:13:18.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.423 "is_configured": false, 00:13:18.423 "data_offset": 0, 00:13:18.423 "data_size": 63488 00:13:18.423 }, 00:13:18.423 { 00:13:18.423 "name": "BaseBdev2", 00:13:18.423 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:18.423 "is_configured": true, 00:13:18.423 "data_offset": 2048, 00:13:18.423 "data_size": 63488 00:13:18.423 } 00:13:18.423 ] 00:13:18.423 }' 00:13:18.423 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.423 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.682 "name": "raid_bdev1", 00:13:18.682 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:18.682 "strip_size_kb": 0, 00:13:18.682 "state": "online", 00:13:18.682 "raid_level": "raid1", 00:13:18.682 "superblock": true, 00:13:18.682 "num_base_bdevs": 2, 00:13:18.682 "num_base_bdevs_discovered": 1, 00:13:18.682 "num_base_bdevs_operational": 1, 00:13:18.682 "base_bdevs_list": [ 00:13:18.682 { 00:13:18.682 "name": null, 00:13:18.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.682 "is_configured": false, 00:13:18.682 "data_offset": 0, 00:13:18.682 "data_size": 63488 00:13:18.682 }, 00:13:18.682 { 00:13:18.682 "name": "BaseBdev2", 00:13:18.682 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:18.682 "is_configured": true, 00:13:18.682 "data_offset": 2048, 00:13:18.682 "data_size": 63488 00:13:18.682 } 00:13:18.682 ] 00:13:18.682 }' 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.682 13:29:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.682 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.683 [2024-11-18 13:29:48.658223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.683 [2024-11-18 13:29:48.658452] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:18.683 [2024-11-18 13:29:48.658510] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:18.683 request: 00:13:18.683 { 00:13:18.683 "base_bdev": "BaseBdev1", 00:13:18.683 "raid_bdev": "raid_bdev1", 00:13:18.683 "method": 
"bdev_raid_add_base_bdev", 00:13:18.683 "req_id": 1 00:13:18.683 } 00:13:18.683 Got JSON-RPC error response 00:13:18.683 response: 00:13:18.683 { 00:13:18.683 "code": -22, 00:13:18.683 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:18.683 } 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:18.683 13:29:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:19.621 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.622 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.622 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.881 13:29:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.881 "name": "raid_bdev1", 00:13:19.881 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:19.881 "strip_size_kb": 0, 00:13:19.881 "state": "online", 00:13:19.881 "raid_level": "raid1", 00:13:19.881 "superblock": true, 00:13:19.881 "num_base_bdevs": 2, 00:13:19.881 "num_base_bdevs_discovered": 1, 00:13:19.881 "num_base_bdevs_operational": 1, 00:13:19.881 "base_bdevs_list": [ 00:13:19.881 { 00:13:19.881 "name": null, 00:13:19.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.881 "is_configured": false, 00:13:19.881 "data_offset": 0, 00:13:19.881 "data_size": 63488 00:13:19.881 }, 00:13:19.881 { 00:13:19.881 "name": "BaseBdev2", 00:13:19.881 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:19.881 "is_configured": true, 00:13:19.881 "data_offset": 2048, 00:13:19.881 "data_size": 63488 00:13:19.881 } 00:13:19.881 ] 00:13:19.881 }' 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.881 13:29:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.141 "name": "raid_bdev1", 00:13:20.141 "uuid": "72646762-762f-4419-a592-6299a0056b25", 00:13:20.141 "strip_size_kb": 0, 00:13:20.141 "state": "online", 00:13:20.141 "raid_level": "raid1", 00:13:20.141 "superblock": true, 00:13:20.141 "num_base_bdevs": 2, 00:13:20.141 "num_base_bdevs_discovered": 1, 00:13:20.141 "num_base_bdevs_operational": 1, 00:13:20.141 "base_bdevs_list": [ 00:13:20.141 { 00:13:20.141 "name": null, 00:13:20.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.141 "is_configured": false, 00:13:20.141 "data_offset": 0, 00:13:20.141 "data_size": 63488 00:13:20.141 }, 00:13:20.141 { 00:13:20.141 "name": "BaseBdev2", 00:13:20.141 "uuid": "0092f36d-39a8-55b5-9355-c27163eb9702", 00:13:20.141 "is_configured": true, 00:13:20.141 "data_offset": 2048, 00:13:20.141 "data_size": 63488 00:13:20.141 } 00:13:20.141 ] 00:13:20.141 }' 00:13:20.141 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75731 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75731 ']' 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75731 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75731 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.401 killing process with pid 75731 00:13:20.401 Received shutdown signal, test time was about 60.000000 seconds 00:13:20.401 00:13:20.401 Latency(us) 00:13:20.401 [2024-11-18T13:29:50.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.401 [2024-11-18T13:29:50.455Z] =================================================================================================================== 00:13:20.401 [2024-11-18T13:29:50.455Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75731' 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75731 00:13:20.401 [2024-11-18 13:29:50.293995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.401 [2024-11-18 
13:29:50.294144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.401 13:29:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75731 00:13:20.401 [2024-11-18 13:29:50.294195] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.401 [2024-11-18 13:29:50.294208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:20.661 [2024-11-18 13:29:50.590964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:22.042 ************************************ 00:13:22.042 END TEST raid_rebuild_test_sb 00:13:22.042 ************************************ 00:13:22.042 00:13:22.042 real 0m23.148s 00:13:22.042 user 0m27.750s 00:13:22.042 sys 0m3.767s 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.042 13:29:51 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:22.042 13:29:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:22.042 13:29:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.042 13:29:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.042 ************************************ 00:13:22.042 START TEST raid_rebuild_test_io 00:13:22.042 ************************************ 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:22.042 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:22.043 
13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76461 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76461 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76461 ']' 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.043 13:29:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.043 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:22.043 Zero copy mechanism will not be used. 00:13:22.043 [2024-11-18 13:29:51.849551] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:13:22.043 [2024-11-18 13:29:51.849656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76461 ] 00:13:22.043 [2024-11-18 13:29:52.020933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.302 [2024-11-18 13:29:52.135995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.302 [2024-11-18 13:29:52.339811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.302 [2024-11-18 13:29:52.339843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 BaseBdev1_malloc 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 [2024-11-18 13:29:52.720732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:22.873 [2024-11-18 13:29:52.720809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.873 [2024-11-18 13:29:52.720833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:22.873 [2024-11-18 13:29:52.720845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.873 [2024-11-18 13:29:52.722918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.873 [2024-11-18 13:29:52.723041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.873 BaseBdev1 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 BaseBdev2_malloc 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 [2024-11-18 13:29:52.776119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:22.873 [2024-11-18 13:29:52.776194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.873 [2024-11-18 13:29:52.776213] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:22.873 [2024-11-18 13:29:52.776224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.873 [2024-11-18 13:29:52.778191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.873 [2024-11-18 13:29:52.778228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:22.873 BaseBdev2 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 spare_malloc 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 spare_delay 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 [2024-11-18 13:29:52.855359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:22.873 [2024-11-18 13:29:52.855503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.873 [2024-11-18 13:29:52.855541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:22.873 [2024-11-18 13:29:52.855574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.873 [2024-11-18 13:29:52.857663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.873 [2024-11-18 13:29:52.857739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.873 spare 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 [2024-11-18 13:29:52.867410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.873 [2024-11-18 13:29:52.869197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.873 [2024-11-18 13:29:52.869274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:22.873 [2024-11-18 13:29:52.869287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:22.873 [2024-11-18 13:29:52.869510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:22.873 [2024-11-18 13:29:52.869646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:22.873 [2024-11-18 13:29:52.869656] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:22.873 [2024-11-18 13:29:52.869795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.133 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.133 
"name": "raid_bdev1", 00:13:23.133 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:23.133 "strip_size_kb": 0, 00:13:23.133 "state": "online", 00:13:23.133 "raid_level": "raid1", 00:13:23.133 "superblock": false, 00:13:23.133 "num_base_bdevs": 2, 00:13:23.133 "num_base_bdevs_discovered": 2, 00:13:23.133 "num_base_bdevs_operational": 2, 00:13:23.133 "base_bdevs_list": [ 00:13:23.133 { 00:13:23.133 "name": "BaseBdev1", 00:13:23.133 "uuid": "55037347-a066-5ea0-855a-2230998a7626", 00:13:23.133 "is_configured": true, 00:13:23.133 "data_offset": 0, 00:13:23.133 "data_size": 65536 00:13:23.133 }, 00:13:23.133 { 00:13:23.133 "name": "BaseBdev2", 00:13:23.133 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:23.133 "is_configured": true, 00:13:23.133 "data_offset": 0, 00:13:23.133 "data_size": 65536 00:13:23.133 } 00:13:23.133 ] 00:13:23.133 }' 00:13:23.133 13:29:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.133 13:29:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.393 [2024-11-18 13:29:53.307049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.393 [2024-11-18 13:29:53.406569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.393 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.394 13:29:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.394 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.654 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.654 "name": "raid_bdev1", 00:13:23.654 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:23.654 "strip_size_kb": 0, 00:13:23.654 "state": "online", 00:13:23.654 "raid_level": "raid1", 00:13:23.654 "superblock": false, 00:13:23.654 "num_base_bdevs": 2, 00:13:23.654 "num_base_bdevs_discovered": 1, 00:13:23.654 "num_base_bdevs_operational": 1, 00:13:23.654 "base_bdevs_list": [ 00:13:23.654 { 00:13:23.654 "name": null, 00:13:23.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.654 "is_configured": false, 00:13:23.654 "data_offset": 0, 00:13:23.654 "data_size": 65536 00:13:23.654 }, 00:13:23.654 { 00:13:23.654 "name": "BaseBdev2", 00:13:23.654 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:23.654 "is_configured": true, 00:13:23.654 "data_offset": 0, 00:13:23.654 "data_size": 65536 00:13:23.654 } 00:13:23.654 ] 00:13:23.654 }' 00:13:23.654 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:23.654 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.654 [2024-11-18 13:29:53.502532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:23.654 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:23.654 Zero copy mechanism will not be used. 00:13:23.654 Running I/O for 60 seconds... 00:13:23.913 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.913 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.913 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.913 [2024-11-18 13:29:53.836309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.913 13:29:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.913 13:29:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:23.913 [2024-11-18 13:29:53.872272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:23.913 [2024-11-18 13:29:53.874110] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:24.173 [2024-11-18 13:29:53.980945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.173 [2024-11-18 13:29:53.981657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.173 [2024-11-18 13:29:54.193679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.173 [2024-11-18 13:29:54.194115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.743 169.00 IOPS, 507.00 MiB/s 
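The `verify_raid_bdev_state` helper traced above fetches `bdev_raid_get_bdevs all` and filters the result with `jq -r '.[] | select(.name == "raid_bdev1")'` before asserting on the state fields. A minimal Python sketch of that selection and the assertions, run against a hypothetical sample shaped like the dumps in this log (no SPDK target involved):

```python
import json

# Hypothetical sample shaped like the bdev_raid_get_bdevs output in this log.
bdevs = json.loads("""
[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
   "strip_size_kb": 0, "num_base_bdevs": 2,
   "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2}
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The same checks verify_raid_bdev_state performs on the filtered entry.
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 2
print("raid_bdev1 state verified:", info["state"])
```

The `select` filter matters because `bdev_raid_get_bdevs all` returns every raid bdev in the target; the test keys on the name rather than assuming a single entry.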
[2024-11-18T13:29:54.797Z] [2024-11-18 13:29:54.527032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:24.743 [2024-11-18 13:29:54.767815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.003 "name": "raid_bdev1", 00:13:25.003 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:25.003 "strip_size_kb": 0, 00:13:25.003 "state": "online", 00:13:25.003 "raid_level": "raid1", 00:13:25.003 "superblock": false, 00:13:25.003 "num_base_bdevs": 2, 00:13:25.003 "num_base_bdevs_discovered": 2, 00:13:25.003 "num_base_bdevs_operational": 2, 00:13:25.003 "process": { 00:13:25.003 "type": "rebuild", 00:13:25.003 "target": "spare", 
00:13:25.003 "progress": { 00:13:25.003 "blocks": 10240, 00:13:25.003 "percent": 15 00:13:25.003 } 00:13:25.003 }, 00:13:25.003 "base_bdevs_list": [ 00:13:25.003 { 00:13:25.003 "name": "spare", 00:13:25.003 "uuid": "17b9a51f-d556-51a8-9eef-b501b99f5d3a", 00:13:25.003 "is_configured": true, 00:13:25.003 "data_offset": 0, 00:13:25.003 "data_size": 65536 00:13:25.003 }, 00:13:25.003 { 00:13:25.003 "name": "BaseBdev2", 00:13:25.003 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:25.003 "is_configured": true, 00:13:25.003 "data_offset": 0, 00:13:25.003 "data_size": 65536 00:13:25.003 } 00:13:25.003 ] 00:13:25.003 }' 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.003 13:29:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.003 [2024-11-18 13:29:54.989945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.264 [2024-11-18 13:29:55.109023] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:25.264 [2024-11-18 13:29:55.111627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.264 [2024-11-18 13:29:55.111721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.264 [2024-11-18 13:29:55.111751] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:13:25.264 [2024-11-18 13:29:55.158282] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.264 "name": "raid_bdev1", 00:13:25.264 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:25.264 "strip_size_kb": 0, 00:13:25.264 "state": "online", 00:13:25.264 "raid_level": "raid1", 00:13:25.264 "superblock": false, 00:13:25.264 "num_base_bdevs": 2, 00:13:25.264 "num_base_bdevs_discovered": 1, 00:13:25.264 "num_base_bdevs_operational": 1, 00:13:25.264 "base_bdevs_list": [ 00:13:25.264 { 00:13:25.264 "name": null, 00:13:25.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.264 "is_configured": false, 00:13:25.264 "data_offset": 0, 00:13:25.264 "data_size": 65536 00:13:25.264 }, 00:13:25.264 { 00:13:25.264 "name": "BaseBdev2", 00:13:25.264 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:25.264 "is_configured": true, 00:13:25.264 "data_offset": 0, 00:13:25.264 "data_size": 65536 00:13:25.264 } 00:13:25.264 ] 00:13:25.264 }' 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.264 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.784 172.50 IOPS, 517.50 MiB/s [2024-11-18T13:29:55.838Z] 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.784 "name": "raid_bdev1", 00:13:25.784 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:25.784 "strip_size_kb": 0, 00:13:25.784 "state": "online", 00:13:25.784 "raid_level": "raid1", 00:13:25.784 "superblock": false, 00:13:25.784 "num_base_bdevs": 2, 00:13:25.784 "num_base_bdevs_discovered": 1, 00:13:25.784 "num_base_bdevs_operational": 1, 00:13:25.784 "base_bdevs_list": [ 00:13:25.784 { 00:13:25.784 "name": null, 00:13:25.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.784 "is_configured": false, 00:13:25.784 "data_offset": 0, 00:13:25.784 "data_size": 65536 00:13:25.784 }, 00:13:25.784 { 00:13:25.784 "name": "BaseBdev2", 00:13:25.784 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:25.784 "is_configured": true, 00:13:25.784 "data_offset": 0, 00:13:25.784 "data_size": 65536 00:13:25.784 } 00:13:25.784 ] 00:13:25.784 }' 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:25.784 [2024-11-18 13:29:55.744924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.784 13:29:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:25.784 [2024-11-18 13:29:55.819665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:25.784 [2024-11-18 13:29:55.821569] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.044 [2024-11-18 13:29:55.928802] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.044 [2024-11-18 13:29:55.929492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.304 [2024-11-18 13:29:56.136574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:26.304 [2024-11-18 13:29:56.136964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:26.564 [2024-11-18 13:29:56.461389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:26.564 [2024-11-18 13:29:56.462034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:26.825 158.67 IOPS, 476.00 MiB/s [2024-11-18T13:29:56.879Z] [2024-11-18 13:29:56.671268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:26.825 [2024-11-18 13:29:56.671698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:26.825 13:29:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.825 "name": "raid_bdev1", 00:13:26.825 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:26.825 "strip_size_kb": 0, 00:13:26.825 "state": "online", 00:13:26.825 "raid_level": "raid1", 00:13:26.825 "superblock": false, 00:13:26.825 "num_base_bdevs": 2, 00:13:26.825 "num_base_bdevs_discovered": 2, 00:13:26.825 "num_base_bdevs_operational": 2, 00:13:26.825 "process": { 00:13:26.825 "type": "rebuild", 00:13:26.825 "target": "spare", 00:13:26.825 "progress": { 00:13:26.825 "blocks": 12288, 00:13:26.825 "percent": 18 00:13:26.825 } 00:13:26.825 }, 00:13:26.825 "base_bdevs_list": [ 00:13:26.825 { 00:13:26.825 "name": "spare", 00:13:26.825 "uuid": "17b9a51f-d556-51a8-9eef-b501b99f5d3a", 00:13:26.825 "is_configured": true, 00:13:26.825 "data_offset": 0, 00:13:26.825 "data_size": 65536 
00:13:26.825 }, 00:13:26.825 { 00:13:26.825 "name": "BaseBdev2", 00:13:26.825 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:26.825 "is_configured": true, 00:13:26.825 "data_offset": 0, 00:13:26.825 "data_size": 65536 00:13:26.825 } 00:13:26.825 ] 00:13:26.825 }' 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.825 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.085 [2024-11-18 13:29:56.888025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:27.085 [2024-11-18 13:29:56.888626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=410 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.085 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.085 "name": "raid_bdev1", 00:13:27.085 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:27.085 "strip_size_kb": 0, 00:13:27.085 "state": "online", 00:13:27.085 "raid_level": "raid1", 00:13:27.085 "superblock": false, 00:13:27.085 "num_base_bdevs": 2, 00:13:27.085 "num_base_bdevs_discovered": 2, 00:13:27.085 "num_base_bdevs_operational": 2, 00:13:27.085 "process": { 00:13:27.085 "type": "rebuild", 00:13:27.085 "target": "spare", 00:13:27.085 "progress": { 00:13:27.085 "blocks": 14336, 00:13:27.085 "percent": 21 00:13:27.085 } 00:13:27.085 }, 00:13:27.085 "base_bdevs_list": [ 00:13:27.085 { 00:13:27.085 "name": "spare", 00:13:27.085 "uuid": "17b9a51f-d556-51a8-9eef-b501b99f5d3a", 00:13:27.086 "is_configured": true, 00:13:27.086 "data_offset": 0, 00:13:27.086 "data_size": 65536 00:13:27.086 }, 00:13:27.086 { 00:13:27.086 "name": "BaseBdev2", 00:13:27.086 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:27.086 "is_configured": true, 00:13:27.086 "data_offset": 0, 00:13:27.086 "data_size": 65536 00:13:27.086 } 00:13:27.086 ] 00:13:27.086 }' 00:13:27.086 
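While the rebuild runs, the dumps above carry a `process` object whose `progress.percent` tracks `progress.blocks` against the raid bdev's 65536 blocks. Across this log the pairs are 10240 → 15, 12288 → 18, 14336 → 21, 28672 → 43, 51200 → 78, which is consistent with a simple floor of `blocks * 100 / num_blocks`; a sketch under that assumption (the relation is inferred from the logged values, not taken from SPDK source):

```python
# Assumed relation between rebuild progress.blocks and progress.percent,
# inferred from the dumps in this log (raid bdev of 65536 blocks).
NUM_BLOCKS = 65536

def rebuild_percent(blocks: int, num_blocks: int = NUM_BLOCKS) -> int:
    """Floor percentage of rebuilt blocks, matching the logged values."""
    return blocks * 100 // num_blocks

# Pairs taken verbatim from the progress dumps above.
observed = {10240: 15, 12288: 18, 14336: 21, 28672: 43, 51200: 78}
for blocks, percent in observed.items():
    assert rebuild_percent(blocks) == percent
print("all logged progress percentages reproduced")
```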
13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.086 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.086 13:29:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.086 13:29:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.086 13:29:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.086 [2024-11-18 13:29:57.110811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:27.656 [2024-11-18 13:29:57.434222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:27.656 [2024-11-18 13:29:57.434783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:27.656 129.75 IOPS, 389.25 MiB/s [2024-11-18T13:29:57.710Z] [2024-11-18 13:29:57.649691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.224 "name": "raid_bdev1", 00:13:28.224 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:28.224 "strip_size_kb": 0, 00:13:28.224 "state": "online", 00:13:28.224 "raid_level": "raid1", 00:13:28.224 "superblock": false, 00:13:28.224 "num_base_bdevs": 2, 00:13:28.224 "num_base_bdevs_discovered": 2, 00:13:28.224 "num_base_bdevs_operational": 2, 00:13:28.224 "process": { 00:13:28.224 "type": "rebuild", 00:13:28.224 "target": "spare", 00:13:28.224 "progress": { 00:13:28.224 "blocks": 28672, 00:13:28.224 "percent": 43 00:13:28.224 } 00:13:28.224 }, 00:13:28.224 "base_bdevs_list": [ 00:13:28.224 { 00:13:28.224 "name": "spare", 00:13:28.224 "uuid": "17b9a51f-d556-51a8-9eef-b501b99f5d3a", 00:13:28.224 "is_configured": true, 00:13:28.224 "data_offset": 0, 00:13:28.224 "data_size": 65536 00:13:28.224 }, 00:13:28.224 { 00:13:28.224 "name": "BaseBdev2", 00:13:28.224 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:28.224 "is_configured": true, 00:13:28.224 "data_offset": 0, 00:13:28.224 "data_size": 65536 00:13:28.224 } 00:13:28.224 ] 00:13:28.224 }' 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.224 
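The wait loop distinguishes an active rebuild from a finished one with `jq -r '.process.type // "none"'`: once the rebuild completes, the `process` object disappears from the dump, the `//` alternative yields `none`, and the `[[ none == \r\e\b\u\i\l\d ]]` test at `bdev_raid.sh@709` breaks out, as seen near the end of this chunk. The same fallback (for a missing key) in a Python sketch over two hypothetical dumps:

```python
# Hypothetical dumps: one mid-rebuild (with a "process" object, as in the
# 28672-block dump above) and one after completion (no "process" key).
mid_rebuild = {
    "name": "raid_bdev1",
    "process": {"type": "rebuild", "target": "spare",
                "progress": {"blocks": 28672, "percent": 43}},
}
finished = {"name": "raid_bdev1"}

def process_type(info: dict) -> str:
    # Equivalent of: jq -r '.process.type // "none"' (missing-key case)
    return info.get("process", {}).get("type", "none")

assert process_type(mid_rebuild) == "rebuild"
assert process_type(finished) == "none"  # the condition that ends the wait loop
print(process_type(mid_rebuild), process_type(finished))
```

Note that jq's `//` also substitutes on `null` or `false` values, not only missing keys; the `dict.get` chain covers the case this test actually exercises.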
13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.224 13:29:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.483 116.60 IOPS, 349.80 MiB/s [2024-11-18T13:29:58.537Z] [2024-11-18 13:29:58.519654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:29.421 [2024-11-18 13:29:59.141368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.422 "name": "raid_bdev1", 00:13:29.422 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:29.422 
"strip_size_kb": 0, 00:13:29.422 "state": "online", 00:13:29.422 "raid_level": "raid1", 00:13:29.422 "superblock": false, 00:13:29.422 "num_base_bdevs": 2, 00:13:29.422 "num_base_bdevs_discovered": 2, 00:13:29.422 "num_base_bdevs_operational": 2, 00:13:29.422 "process": { 00:13:29.422 "type": "rebuild", 00:13:29.422 "target": "spare", 00:13:29.422 "progress": { 00:13:29.422 "blocks": 51200, 00:13:29.422 "percent": 78 00:13:29.422 } 00:13:29.422 }, 00:13:29.422 "base_bdevs_list": [ 00:13:29.422 { 00:13:29.422 "name": "spare", 00:13:29.422 "uuid": "17b9a51f-d556-51a8-9eef-b501b99f5d3a", 00:13:29.422 "is_configured": true, 00:13:29.422 "data_offset": 0, 00:13:29.422 "data_size": 65536 00:13:29.422 }, 00:13:29.422 { 00:13:29.422 "name": "BaseBdev2", 00:13:29.422 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:29.422 "is_configured": true, 00:13:29.422 "data_offset": 0, 00:13:29.422 "data_size": 65536 00:13:29.422 } 00:13:29.422 ] 00:13:29.422 }' 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.422 [2024-11-18 13:29:59.247869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.422 13:29:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.681 103.67 IOPS, 311.00 MiB/s [2024-11-18T13:29:59.735Z] [2024-11-18 13:29:59.563305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:29.681 [2024-11-18 13:29:59.664734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:29.681 [2024-11-18 13:29:59.665044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:29.941 [2024-11-18 13:29:59.986435] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:30.200 [2024-11-18 13:30:00.086211] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:30.200 [2024-11-18 13:30:00.088006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.460 "name": "raid_bdev1", 00:13:30.460 "uuid": 
"170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:30.460 "strip_size_kb": 0, 00:13:30.460 "state": "online", 00:13:30.460 "raid_level": "raid1", 00:13:30.460 "superblock": false, 00:13:30.460 "num_base_bdevs": 2, 00:13:30.460 "num_base_bdevs_discovered": 2, 00:13:30.460 "num_base_bdevs_operational": 2, 00:13:30.460 "base_bdevs_list": [ 00:13:30.460 { 00:13:30.460 "name": "spare", 00:13:30.460 "uuid": "17b9a51f-d556-51a8-9eef-b501b99f5d3a", 00:13:30.460 "is_configured": true, 00:13:30.460 "data_offset": 0, 00:13:30.460 "data_size": 65536 00:13:30.460 }, 00:13:30.460 { 00:13:30.460 "name": "BaseBdev2", 00:13:30.460 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:30.460 "is_configured": true, 00:13:30.460 "data_offset": 0, 00:13:30.460 "data_size": 65536 00:13:30.460 } 00:13:30.460 ] 00:13:30.460 }' 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.460 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.460 93.43 IOPS, 280.29 MiB/s [2024-11-18T13:30:00.514Z] 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.725 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.726 "name": "raid_bdev1", 00:13:30.726 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:30.726 "strip_size_kb": 0, 00:13:30.726 "state": "online", 00:13:30.726 "raid_level": "raid1", 00:13:30.726 "superblock": false, 00:13:30.726 "num_base_bdevs": 2, 00:13:30.726 "num_base_bdevs_discovered": 2, 00:13:30.726 "num_base_bdevs_operational": 2, 00:13:30.726 "base_bdevs_list": [ 00:13:30.726 { 00:13:30.726 "name": "spare", 00:13:30.726 "uuid": "17b9a51f-d556-51a8-9eef-b501b99f5d3a", 00:13:30.726 "is_configured": true, 00:13:30.726 "data_offset": 0, 00:13:30.726 "data_size": 65536 00:13:30.726 }, 00:13:30.726 { 00:13:30.726 "name": "BaseBdev2", 00:13:30.726 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:30.726 "is_configured": true, 00:13:30.726 "data_offset": 0, 00:13:30.726 "data_size": 65536 00:13:30.726 } 00:13:30.726 ] 00:13:30.726 }' 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.726 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.727 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.727 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.727 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.727 "name": "raid_bdev1", 00:13:30.727 "uuid": "170568ea-3eb5-4038-bf71-7a2197f3234f", 00:13:30.727 "strip_size_kb": 0, 00:13:30.727 "state": "online", 00:13:30.727 "raid_level": "raid1", 00:13:30.727 "superblock": false, 00:13:30.727 "num_base_bdevs": 2, 00:13:30.727 "num_base_bdevs_discovered": 2, 00:13:30.727 
"num_base_bdevs_operational": 2, 00:13:30.727 "base_bdevs_list": [ 00:13:30.727 { 00:13:30.727 "name": "spare", 00:13:30.727 "uuid": "17b9a51f-d556-51a8-9eef-b501b99f5d3a", 00:13:30.727 "is_configured": true, 00:13:30.727 "data_offset": 0, 00:13:30.727 "data_size": 65536 00:13:30.727 }, 00:13:30.727 { 00:13:30.727 "name": "BaseBdev2", 00:13:30.727 "uuid": "170b3f62-15b9-5c53-b2d0-3e15665be1c4", 00:13:30.727 "is_configured": true, 00:13:30.727 "data_offset": 0, 00:13:30.727 "data_size": 65536 00:13:30.727 } 00:13:30.727 ] 00:13:30.727 }' 00:13:30.727 13:30:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.727 13:30:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.302 [2024-11-18 13:30:01.106459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.302 [2024-11-18 13:30:01.106577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.302 00:13:31.302 Latency(us) 00:13:31.302 [2024-11-18T13:30:01.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.302 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:31.302 raid_bdev1 : 7.71 87.70 263.09 0.00 0.00 14887.56 307.65 130957.53 00:13:31.302 [2024-11-18T13:30:01.356Z] =================================================================================================================== 00:13:31.302 [2024-11-18T13:30:01.356Z] Total : 87.70 263.09 0.00 0.00 14887.56 307.65 130957.53 00:13:31.302 [2024-11-18 13:30:01.218822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:31.302 [2024-11-18 13:30:01.218866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.302 [2024-11-18 13:30:01.218939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.302 [2024-11-18 13:30:01.218949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:31.302 { 00:13:31.302 "results": [ 00:13:31.302 { 00:13:31.302 "job": "raid_bdev1", 00:13:31.302 "core_mask": "0x1", 00:13:31.302 "workload": "randrw", 00:13:31.302 "percentage": 50, 00:13:31.302 "status": "finished", 00:13:31.302 "queue_depth": 2, 00:13:31.302 "io_size": 3145728, 00:13:31.302 "runtime": 7.708487, 00:13:31.302 "iops": 87.69554907467574, 00:13:31.302 "mibps": 263.0866472240272, 00:13:31.302 "io_failed": 0, 00:13:31.302 "io_timeout": 0, 00:13:31.302 "avg_latency_us": 14887.557923567869, 00:13:31.302 "min_latency_us": 307.6471615720524, 00:13:31.302 "max_latency_us": 130957.52663755459 00:13:31.302 } 00:13:31.302 ], 00:13:31.302 "core_count": 1 00:13:31.302 } 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.302 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:31.561 /dev/nbd0 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@877 -- # break 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.561 1+0 records in 00:13:31.561 1+0 records out 00:13:31.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243304 s, 16.8 MB/s 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.561 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:31.821 /dev/nbd1 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.821 1+0 records in 
00:13:31.821 1+0 records out 00:13:31.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586026 s, 7.0 MB/s 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.821 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:32.079 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:32.079 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.079 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:32.079 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.079 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.079 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.079 13:30:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.338 13:30:02 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76461 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76461 ']' 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76461 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.338 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76461 00:13:32.598 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.598 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.598 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76461' 00:13:32.598 killing process with pid 76461 00:13:32.598 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76461 00:13:32.598 Received shutdown signal, test time was about 8.929944 seconds 00:13:32.598 00:13:32.598 Latency(us) 00:13:32.598 [2024-11-18T13:30:02.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.598 [2024-11-18T13:30:02.652Z] =================================================================================================================== 00:13:32.598 [2024-11-18T13:30:02.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:13:32.598 [2024-11-18 13:30:02.417336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.598 13:30:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76461 00:13:32.598 [2024-11-18 13:30:02.643599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:33.980 00:13:33.980 real 0m12.028s 00:13:33.980 user 0m15.067s 00:13:33.980 sys 0m1.487s 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.980 ************************************ 00:13:33.980 END TEST raid_rebuild_test_io 00:13:33.980 ************************************ 00:13:33.980 13:30:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:33.980 13:30:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:33.980 13:30:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.980 13:30:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.980 ************************************ 00:13:33.980 START TEST raid_rebuild_test_sb_io 00:13:33.980 ************************************ 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:33.980 13:30:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:33.980 
13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76837 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76837 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76837 ']' 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.980 13:30:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.980 [2024-11-18 13:30:03.958918] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:33.980 [2024-11-18 13:30:03.959045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76837 ] 00:13:33.980 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:33.980 Zero copy mechanism will not be used. 
00:13:34.240 [2024-11-18 13:30:04.137939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.240 [2024-11-18 13:30:04.245532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.499 [2024-11-18 13:30:04.431559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.499 [2024-11-18 13:30:04.431618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.758 BaseBdev1_malloc 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.758 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.017 [2024-11-18 13:30:04.814871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.017 [2024-11-18 13:30:04.814942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.017 [2024-11-18 13:30:04.814969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:13:35.017 [2024-11-18 13:30:04.814984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.017 [2024-11-18 13:30:04.817163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.017 [2024-11-18 13:30:04.817198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.017 BaseBdev1 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.017 BaseBdev2_malloc 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.017 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.017 [2024-11-18 13:30:04.870627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:35.017 [2024-11-18 13:30:04.870698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.017 [2024-11-18 13:30:04.870738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.017 [2024-11-18 13:30:04.870753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.017 [2024-11-18 13:30:04.872934] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.017 [2024-11-18 13:30:04.872972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.017 BaseBdev2 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.018 spare_malloc 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.018 spare_delay 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.018 [2024-11-18 13:30:04.948558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.018 [2024-11-18 13:30:04.948613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.018 [2024-11-18 13:30:04.948632] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:35.018 [2024-11-18 13:30:04.948642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.018 [2024-11-18 13:30:04.950645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.018 [2024-11-18 13:30:04.950689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.018 spare 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.018 [2024-11-18 13:30:04.960599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.018 [2024-11-18 13:30:04.962315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.018 [2024-11-18 13:30:04.962486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:35.018 [2024-11-18 13:30:04.962502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.018 [2024-11-18 13:30:04.962733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:35.018 [2024-11-18 13:30:04.962913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:35.018 [2024-11-18 13:30:04.962930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:35.018 [2024-11-18 13:30:04.963059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.018 13:30:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.018 13:30:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.018 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.018 "name": "raid_bdev1", 00:13:35.018 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:35.018 
"strip_size_kb": 0, 00:13:35.018 "state": "online", 00:13:35.018 "raid_level": "raid1", 00:13:35.018 "superblock": true, 00:13:35.018 "num_base_bdevs": 2, 00:13:35.018 "num_base_bdevs_discovered": 2, 00:13:35.018 "num_base_bdevs_operational": 2, 00:13:35.018 "base_bdevs_list": [ 00:13:35.018 { 00:13:35.018 "name": "BaseBdev1", 00:13:35.018 "uuid": "39135b64-efdc-53d8-a273-0cc4eb3fe8fa", 00:13:35.018 "is_configured": true, 00:13:35.018 "data_offset": 2048, 00:13:35.018 "data_size": 63488 00:13:35.018 }, 00:13:35.018 { 00:13:35.018 "name": "BaseBdev2", 00:13:35.018 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:35.018 "is_configured": true, 00:13:35.018 "data_offset": 2048, 00:13:35.018 "data_size": 63488 00:13:35.018 } 00:13:35.018 ] 00:13:35.018 }' 00:13:35.018 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.018 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:35.587 [2024-11-18 13:30:05.420118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.587 13:30:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.587 [2024-11-18 13:30:05.515629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.587 13:30:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.587 "name": "raid_bdev1", 00:13:35.587 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:35.587 "strip_size_kb": 0, 00:13:35.587 "state": "online", 00:13:35.587 "raid_level": "raid1", 00:13:35.587 "superblock": true, 00:13:35.587 "num_base_bdevs": 2, 00:13:35.587 "num_base_bdevs_discovered": 1, 00:13:35.587 "num_base_bdevs_operational": 1, 00:13:35.587 "base_bdevs_list": [ 00:13:35.587 { 00:13:35.587 "name": null, 00:13:35.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.587 "is_configured": false, 00:13:35.587 "data_offset": 0, 00:13:35.587 "data_size": 63488 00:13:35.587 }, 00:13:35.587 { 00:13:35.587 "name": "BaseBdev2", 00:13:35.587 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:35.587 "is_configured": true, 00:13:35.587 "data_offset": 2048, 00:13:35.587 "data_size": 63488 00:13:35.587 } 00:13:35.587 ] 00:13:35.587 }' 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.587 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.846 [2024-11-18 13:30:05.643708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:35.846 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:35.846 Zero copy mechanism will not be used. 00:13:35.847 Running I/O for 60 seconds... 00:13:36.107 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.107 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.107 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.107 [2024-11-18 13:30:05.947917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.107 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.107 13:30:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:36.107 [2024-11-18 13:30:06.003958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:36.107 [2024-11-18 13:30:06.005831] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.107 [2024-11-18 13:30:06.114171] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.107 [2024-11-18 13:30:06.114814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.367 [2024-11-18 13:30:06.324006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.367 [2024-11-18 13:30:06.324349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:13:36.626 154.00 IOPS, 462.00 MiB/s [2024-11-18T13:30:06.680Z] [2024-11-18 13:30:06.647959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.193 13:30:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.193 "name": "raid_bdev1", 00:13:37.193 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:37.193 "strip_size_kb": 0, 00:13:37.193 "state": "online", 00:13:37.193 "raid_level": "raid1", 00:13:37.193 "superblock": true, 00:13:37.193 "num_base_bdevs": 2, 00:13:37.193 "num_base_bdevs_discovered": 2, 00:13:37.193 "num_base_bdevs_operational": 2, 00:13:37.193 "process": { 00:13:37.193 "type": "rebuild", 00:13:37.193 "target": "spare", 00:13:37.193 "progress": { 00:13:37.193 "blocks": 14336, 00:13:37.193 "percent": 22 
00:13:37.193 } 00:13:37.193 }, 00:13:37.193 "base_bdevs_list": [ 00:13:37.193 { 00:13:37.193 "name": "spare", 00:13:37.193 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:37.193 "is_configured": true, 00:13:37.193 "data_offset": 2048, 00:13:37.193 "data_size": 63488 00:13:37.193 }, 00:13:37.193 { 00:13:37.193 "name": "BaseBdev2", 00:13:37.193 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:37.193 "is_configured": true, 00:13:37.193 "data_offset": 2048, 00:13:37.193 "data_size": 63488 00:13:37.193 } 00:13:37.193 ] 00:13:37.193 }' 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.193 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.193 [2024-11-18 13:30:07.122746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.453 [2024-11-18 13:30:07.248699] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.453 [2024-11-18 13:30:07.257380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.453 [2024-11-18 13:30:07.257496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.453 [2024-11-18 13:30:07.257525] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.453 
[2024-11-18 13:30:07.300526] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.453 "name": "raid_bdev1", 00:13:37.453 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:37.453 "strip_size_kb": 0, 00:13:37.453 "state": "online", 00:13:37.453 "raid_level": "raid1", 00:13:37.453 "superblock": true, 00:13:37.453 "num_base_bdevs": 2, 00:13:37.453 "num_base_bdevs_discovered": 1, 00:13:37.453 "num_base_bdevs_operational": 1, 00:13:37.453 "base_bdevs_list": [ 00:13:37.453 { 00:13:37.453 "name": null, 00:13:37.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.453 "is_configured": false, 00:13:37.453 "data_offset": 0, 00:13:37.453 "data_size": 63488 00:13:37.453 }, 00:13:37.453 { 00:13:37.453 "name": "BaseBdev2", 00:13:37.453 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:37.453 "is_configured": true, 00:13:37.453 "data_offset": 2048, 00:13:37.453 "data_size": 63488 00:13:37.453 } 00:13:37.453 ] 00:13:37.453 }' 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.453 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.712 156.00 IOPS, 468.00 MiB/s [2024-11-18T13:30:07.766Z] 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.712 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.712 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.712 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.712 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.712 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.712 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.712 
13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.712 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.972 "name": "raid_bdev1", 00:13:37.972 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:37.972 "strip_size_kb": 0, 00:13:37.972 "state": "online", 00:13:37.972 "raid_level": "raid1", 00:13:37.972 "superblock": true, 00:13:37.972 "num_base_bdevs": 2, 00:13:37.972 "num_base_bdevs_discovered": 1, 00:13:37.972 "num_base_bdevs_operational": 1, 00:13:37.972 "base_bdevs_list": [ 00:13:37.972 { 00:13:37.972 "name": null, 00:13:37.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.972 "is_configured": false, 00:13:37.972 "data_offset": 0, 00:13:37.972 "data_size": 63488 00:13:37.972 }, 00:13:37.972 { 00:13:37.972 "name": "BaseBdev2", 00:13:37.972 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:37.972 "is_configured": true, 00:13:37.972 "data_offset": 2048, 00:13:37.972 "data_size": 63488 00:13:37.972 } 00:13:37.972 ] 00:13:37.972 }' 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.972 13:30:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.972 [2024-11-18 13:30:07.909817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.972 13:30:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:37.972 [2024-11-18 13:30:07.964751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:37.972 [2024-11-18 13:30:07.966855] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.232 [2024-11-18 13:30:08.080244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.232 [2024-11-18 13:30:08.080796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:38.232 [2024-11-18 13:30:08.201502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.232 [2024-11-18 13:30:08.201828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.492 [2024-11-18 13:30:08.541500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:39.012 162.67 IOPS, 488.00 MiB/s [2024-11-18T13:30:09.066Z] 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.012 
13:30:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.012 [2024-11-18 13:30:08.950323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.012 13:30:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.012 "name": "raid_bdev1", 00:13:39.012 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:39.012 "strip_size_kb": 0, 00:13:39.012 "state": "online", 00:13:39.012 "raid_level": "raid1", 00:13:39.012 "superblock": true, 00:13:39.012 "num_base_bdevs": 2, 00:13:39.012 "num_base_bdevs_discovered": 2, 00:13:39.012 "num_base_bdevs_operational": 2, 00:13:39.012 "process": { 00:13:39.012 "type": "rebuild", 00:13:39.012 "target": "spare", 00:13:39.012 "progress": { 00:13:39.012 "blocks": 14336, 00:13:39.012 "percent": 22 00:13:39.012 } 00:13:39.012 }, 00:13:39.012 "base_bdevs_list": [ 00:13:39.012 { 00:13:39.012 "name": "spare", 00:13:39.012 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:39.012 "is_configured": true, 00:13:39.012 "data_offset": 2048, 00:13:39.012 "data_size": 63488 00:13:39.012 }, 00:13:39.012 { 00:13:39.012 "name": "BaseBdev2", 00:13:39.012 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:39.012 "is_configured": true, 00:13:39.012 "data_offset": 2048, 00:13:39.013 "data_size": 63488 00:13:39.013 } 00:13:39.013 ] 00:13:39.013 
}' 00:13:39.013 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.013 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.013 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:39.273 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=423 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.273 "name": "raid_bdev1", 00:13:39.273 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:39.273 "strip_size_kb": 0, 00:13:39.273 "state": "online", 00:13:39.273 "raid_level": "raid1", 00:13:39.273 "superblock": true, 00:13:39.273 "num_base_bdevs": 2, 00:13:39.273 "num_base_bdevs_discovered": 2, 00:13:39.273 "num_base_bdevs_operational": 2, 00:13:39.273 "process": { 00:13:39.273 "type": "rebuild", 00:13:39.273 "target": "spare", 00:13:39.273 "progress": { 00:13:39.273 "blocks": 14336, 00:13:39.273 "percent": 22 00:13:39.273 } 00:13:39.273 }, 00:13:39.273 "base_bdevs_list": [ 00:13:39.273 { 00:13:39.273 "name": "spare", 00:13:39.273 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:39.273 "is_configured": true, 00:13:39.273 "data_offset": 2048, 00:13:39.273 "data_size": 63488 00:13:39.273 }, 00:13:39.273 { 00:13:39.273 "name": "BaseBdev2", 00:13:39.273 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:39.273 "is_configured": true, 00:13:39.273 "data_offset": 2048, 00:13:39.273 "data_size": 63488 00:13:39.273 } 00:13:39.273 ] 00:13:39.273 }' 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.273 [2024-11-18 13:30:09.158346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.273 [2024-11-18 13:30:09.158772] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.273 13:30:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.532 [2024-11-18 13:30:09.521788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:40.101 148.00 IOPS, 444.00 MiB/s [2024-11-18T13:30:10.155Z] [2024-11-18 13:30:09.843443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:40.101 [2024-11-18 13:30:09.843841] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:40.101 [2024-11-18 13:30:10.049985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.362 13:30:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.362 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.362 "name": "raid_bdev1", 00:13:40.362 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:40.362 "strip_size_kb": 0, 00:13:40.362 "state": "online", 00:13:40.362 "raid_level": "raid1", 00:13:40.362 "superblock": true, 00:13:40.362 "num_base_bdevs": 2, 00:13:40.362 "num_base_bdevs_discovered": 2, 00:13:40.362 "num_base_bdevs_operational": 2, 00:13:40.362 "process": { 00:13:40.362 "type": "rebuild", 00:13:40.362 "target": "spare", 00:13:40.362 "progress": { 00:13:40.362 "blocks": 28672, 00:13:40.362 "percent": 45 00:13:40.362 } 00:13:40.362 }, 00:13:40.363 "base_bdevs_list": [ 00:13:40.363 { 00:13:40.363 "name": "spare", 00:13:40.363 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:40.363 "is_configured": true, 00:13:40.363 "data_offset": 2048, 00:13:40.363 "data_size": 63488 00:13:40.363 }, 00:13:40.363 { 00:13:40.363 "name": "BaseBdev2", 00:13:40.363 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:40.363 "is_configured": true, 00:13:40.363 "data_offset": 2048, 00:13:40.363 "data_size": 63488 00:13:40.363 } 00:13:40.363 ] 00:13:40.363 }' 00:13:40.363 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.363 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.363 13:30:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.363 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.363 13:30:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.649 [2024-11-18 13:30:10.484314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:40.917 130.60 IOPS, 391.80 MiB/s [2024-11-18T13:30:10.971Z] [2024-11-18 13:30:10.803363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:40.917 [2024-11-18 13:30:10.803826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:41.177 [2024-11-18 13:30:11.023529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:41.435 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.435 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.435 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.436 13:30:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.436 "name": "raid_bdev1", 00:13:41.436 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:41.436 "strip_size_kb": 0, 00:13:41.436 "state": "online", 00:13:41.436 "raid_level": "raid1", 00:13:41.436 "superblock": true, 00:13:41.436 "num_base_bdevs": 2, 00:13:41.436 "num_base_bdevs_discovered": 2, 00:13:41.436 "num_base_bdevs_operational": 2, 00:13:41.436 "process": { 00:13:41.436 "type": "rebuild", 00:13:41.436 "target": "spare", 00:13:41.436 "progress": { 00:13:41.436 "blocks": 47104, 00:13:41.436 "percent": 74 00:13:41.436 } 00:13:41.436 }, 00:13:41.436 "base_bdevs_list": [ 00:13:41.436 { 00:13:41.436 "name": "spare", 00:13:41.436 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:41.436 "is_configured": true, 00:13:41.436 "data_offset": 2048, 00:13:41.436 "data_size": 63488 00:13:41.436 }, 00:13:41.436 { 00:13:41.436 "name": "BaseBdev2", 00:13:41.436 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:41.436 "is_configured": true, 00:13:41.436 "data_offset": 2048, 00:13:41.436 "data_size": 63488 00:13:41.436 } 00:13:41.436 ] 00:13:41.436 }' 00:13:41.436 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.695 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.695 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.695 13:30:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.695 13:30:11 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.955 115.50 IOPS, 346.50 MiB/s [2024-11-18T13:30:12.009Z] [2024-11-18 13:30:11.899126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:42.215 [2024-11-18 13:30:12.224843] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.474 [2024-11-18 13:30:12.324651] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.474 [2024-11-18 13:30:12.326594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:42.735 "name": "raid_bdev1", 00:13:42.735 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:42.735 "strip_size_kb": 0, 00:13:42.735 "state": "online", 00:13:42.735 "raid_level": "raid1", 00:13:42.735 "superblock": true, 00:13:42.735 "num_base_bdevs": 2, 00:13:42.735 "num_base_bdevs_discovered": 2, 00:13:42.735 "num_base_bdevs_operational": 2, 00:13:42.735 "base_bdevs_list": [ 00:13:42.735 { 00:13:42.735 "name": "spare", 00:13:42.735 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:42.735 "is_configured": true, 00:13:42.735 "data_offset": 2048, 00:13:42.735 "data_size": 63488 00:13:42.735 }, 00:13:42.735 { 00:13:42.735 "name": "BaseBdev2", 00:13:42.735 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:42.735 "is_configured": true, 00:13:42.735 "data_offset": 2048, 00:13:42.735 "data_size": 63488 00:13:42.735 } 00:13:42.735 ] 00:13:42.735 }' 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.735 102.71 IOPS, 308.14 MiB/s [2024-11-18T13:30:12.789Z] 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.735 "name": "raid_bdev1", 00:13:42.735 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:42.735 "strip_size_kb": 0, 00:13:42.735 "state": "online", 00:13:42.735 "raid_level": "raid1", 00:13:42.735 "superblock": true, 00:13:42.735 "num_base_bdevs": 2, 00:13:42.735 "num_base_bdevs_discovered": 2, 00:13:42.735 "num_base_bdevs_operational": 2, 00:13:42.735 "base_bdevs_list": [ 00:13:42.735 { 00:13:42.735 "name": "spare", 00:13:42.735 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:42.735 "is_configured": true, 00:13:42.735 "data_offset": 2048, 00:13:42.735 "data_size": 63488 00:13:42.735 }, 00:13:42.735 { 00:13:42.735 "name": "BaseBdev2", 00:13:42.735 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:42.735 "is_configured": true, 00:13:42.735 "data_offset": 2048, 00:13:42.735 "data_size": 63488 00:13:42.735 } 00:13:42.735 ] 00:13:42.735 }' 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.735 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.996 "name": "raid_bdev1", 00:13:42.996 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:42.996 "strip_size_kb": 0, 00:13:42.996 
"state": "online", 00:13:42.996 "raid_level": "raid1", 00:13:42.996 "superblock": true, 00:13:42.996 "num_base_bdevs": 2, 00:13:42.996 "num_base_bdevs_discovered": 2, 00:13:42.996 "num_base_bdevs_operational": 2, 00:13:42.996 "base_bdevs_list": [ 00:13:42.996 { 00:13:42.996 "name": "spare", 00:13:42.996 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:42.996 "is_configured": true, 00:13:42.996 "data_offset": 2048, 00:13:42.996 "data_size": 63488 00:13:42.996 }, 00:13:42.996 { 00:13:42.996 "name": "BaseBdev2", 00:13:42.996 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:42.996 "is_configured": true, 00:13:42.996 "data_offset": 2048, 00:13:42.996 "data_size": 63488 00:13:42.996 } 00:13:42.996 ] 00:13:42.996 }' 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.996 13:30:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.256 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.256 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.256 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.256 [2024-11-18 13:30:13.296918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.256 [2024-11-18 13:30:13.296957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.516 00:13:43.516 Latency(us) 00:13:43.516 [2024-11-18T13:30:13.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.516 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:43.516 raid_bdev1 : 7.77 95.81 287.42 0.00 0.00 14844.65 309.44 109894.43 00:13:43.516 [2024-11-18T13:30:13.570Z] 
=================================================================================================================== 00:13:43.516 [2024-11-18T13:30:13.570Z] Total : 95.81 287.42 0.00 0.00 14844.65 309.44 109894.43 00:13:43.516 [2024-11-18 13:30:13.417830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.516 [2024-11-18 13:30:13.417875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.516 [2024-11-18 13:30:13.417958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.516 [2024-11-18 13:30:13.417968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.516 { 00:13:43.516 "results": [ 00:13:43.516 { 00:13:43.516 "job": "raid_bdev1", 00:13:43.516 "core_mask": "0x1", 00:13:43.516 "workload": "randrw", 00:13:43.516 "percentage": 50, 00:13:43.516 "status": "finished", 00:13:43.516 "queue_depth": 2, 00:13:43.516 "io_size": 3145728, 00:13:43.516 "runtime": 7.765646, 00:13:43.516 "iops": 95.80658196368982, 00:13:43.516 "mibps": 287.41974589106945, 00:13:43.516 "io_failed": 0, 00:13:43.516 "io_timeout": 0, 00:13:43.516 "avg_latency_us": 14844.654702540263, 00:13:43.516 "min_latency_us": 309.435807860262, 00:13:43.516 "max_latency_us": 109894.42794759825 00:13:43.516 } 00:13:43.516 ], 00:13:43.516 "core_count": 1 00:13:43.516 } 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.516 13:30:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.516 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:43.776 /dev/nbd0 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.776 1+0 records in 00:13:43.776 1+0 records out 00:13:43.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398717 s, 10.3 MB/s 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.776 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:44.036 /dev/nbd1 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:44.036 13:30:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.036 1+0 records in 00:13:44.036 1+0 records out 00:13:44.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287207 s, 14.3 MB/s 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.036 13:30:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:44.296 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.296 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.296 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.296 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:13:44.296 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.296 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.296 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.556 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.816 [2024-11-18 13:30:14.660250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:44.816 [2024-11-18 13:30:14.660307] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.816 [2024-11-18 13:30:14.660330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:44.816 [2024-11-18 13:30:14.660338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.816 [2024-11-18 13:30:14.662483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.816 [2024-11-18 13:30:14.662522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.816 [2024-11-18 13:30:14.662614] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:44.816 [2024-11-18 13:30:14.662674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.816 [2024-11-18 13:30:14.662814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.816 spare 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.816 [2024-11-18 13:30:14.762713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:44.816 [2024-11-18 13:30:14.762757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:44.816 [2024-11-18 13:30:14.763068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:44.816 [2024-11-18 13:30:14.763277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:44.816 [2024-11-18 13:30:14.763289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007b00 00:13:44.816 [2024-11-18 13:30:14.763473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.816 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.817 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.817 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.817 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.817 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.817 13:30:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.817 "name": "raid_bdev1", 00:13:44.817 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:44.817 "strip_size_kb": 0, 00:13:44.817 "state": "online", 00:13:44.817 "raid_level": "raid1", 00:13:44.817 "superblock": true, 00:13:44.817 "num_base_bdevs": 2, 00:13:44.817 "num_base_bdevs_discovered": 2, 00:13:44.817 "num_base_bdevs_operational": 2, 00:13:44.817 "base_bdevs_list": [ 00:13:44.817 { 00:13:44.817 "name": "spare", 00:13:44.817 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:44.817 "is_configured": true, 00:13:44.817 "data_offset": 2048, 00:13:44.817 "data_size": 63488 00:13:44.817 }, 00:13:44.817 { 00:13:44.817 "name": "BaseBdev2", 00:13:44.817 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:44.817 "is_configured": true, 00:13:44.817 "data_offset": 2048, 00:13:44.817 "data_size": 63488 00:13:44.817 } 00:13:44.817 ] 00:13:44.817 }' 00:13:44.817 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.817 13:30:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.387 13:30:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.387 "name": "raid_bdev1", 00:13:45.387 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:45.387 "strip_size_kb": 0, 00:13:45.387 "state": "online", 00:13:45.387 "raid_level": "raid1", 00:13:45.387 "superblock": true, 00:13:45.387 "num_base_bdevs": 2, 00:13:45.387 "num_base_bdevs_discovered": 2, 00:13:45.387 "num_base_bdevs_operational": 2, 00:13:45.387 "base_bdevs_list": [ 00:13:45.387 { 00:13:45.387 "name": "spare", 00:13:45.387 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:45.387 "is_configured": true, 00:13:45.387 "data_offset": 2048, 00:13:45.387 "data_size": 63488 00:13:45.387 }, 00:13:45.387 { 00:13:45.387 "name": "BaseBdev2", 00:13:45.387 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:45.387 "is_configured": true, 00:13:45.387 "data_offset": 2048, 00:13:45.387 "data_size": 63488 00:13:45.387 } 00:13:45.387 ] 00:13:45.387 }' 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.387 13:30:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.387 [2024-11-18 13:30:15.367183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.387 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.387 "name": "raid_bdev1", 00:13:45.387 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:45.387 "strip_size_kb": 0, 00:13:45.387 "state": "online", 00:13:45.387 "raid_level": "raid1", 00:13:45.387 "superblock": true, 00:13:45.387 "num_base_bdevs": 2, 00:13:45.387 "num_base_bdevs_discovered": 1, 00:13:45.387 "num_base_bdevs_operational": 1, 00:13:45.387 "base_bdevs_list": [ 00:13:45.387 { 00:13:45.387 "name": null, 00:13:45.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.387 "is_configured": false, 00:13:45.387 "data_offset": 0, 00:13:45.387 "data_size": 63488 00:13:45.387 }, 00:13:45.387 { 00:13:45.387 "name": "BaseBdev2", 00:13:45.387 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:45.387 "is_configured": true, 00:13:45.387 "data_offset": 2048, 00:13:45.388 "data_size": 63488 00:13:45.388 } 00:13:45.388 ] 00:13:45.388 }' 00:13:45.388 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.388 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.984 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:45.984 13:30:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.984 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.984 [2024-11-18 13:30:15.798865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.984 [2024-11-18 13:30:15.799158] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:45.984 [2024-11-18 13:30:15.799226] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:45.984 [2024-11-18 13:30:15.799295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.984 [2024-11-18 13:30:15.815635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:45.984 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.984 13:30:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:45.984 [2024-11-18 13:30:15.817461] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.924 "name": "raid_bdev1", 00:13:46.924 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:46.924 "strip_size_kb": 0, 00:13:46.924 "state": "online", 00:13:46.924 "raid_level": "raid1", 00:13:46.924 "superblock": true, 00:13:46.924 "num_base_bdevs": 2, 00:13:46.924 "num_base_bdevs_discovered": 2, 00:13:46.924 "num_base_bdevs_operational": 2, 00:13:46.924 "process": { 00:13:46.924 "type": "rebuild", 00:13:46.924 "target": "spare", 00:13:46.924 "progress": { 00:13:46.924 "blocks": 20480, 00:13:46.924 "percent": 32 00:13:46.924 } 00:13:46.924 }, 00:13:46.924 "base_bdevs_list": [ 00:13:46.924 { 00:13:46.924 "name": "spare", 00:13:46.924 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:46.924 "is_configured": true, 00:13:46.924 "data_offset": 2048, 00:13:46.924 "data_size": 63488 00:13:46.924 }, 00:13:46.924 { 00:13:46.924 "name": "BaseBdev2", 00:13:46.924 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:46.924 "is_configured": true, 00:13:46.924 "data_offset": 2048, 00:13:46.924 "data_size": 63488 00:13:46.924 } 00:13:46.924 ] 00:13:46.924 }' 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.924 13:30:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.924 [2024-11-18 13:30:16.961327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.184 [2024-11-18 13:30:17.022831] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.184 [2024-11-18 13:30:17.022953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.184 [2024-11-18 13:30:17.022969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.184 [2024-11-18 13:30:17.022978] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.184 "name": "raid_bdev1", 00:13:47.184 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:47.184 "strip_size_kb": 0, 00:13:47.184 "state": "online", 00:13:47.184 "raid_level": "raid1", 00:13:47.184 "superblock": true, 00:13:47.184 "num_base_bdevs": 2, 00:13:47.184 "num_base_bdevs_discovered": 1, 00:13:47.184 "num_base_bdevs_operational": 1, 00:13:47.184 "base_bdevs_list": [ 00:13:47.184 { 00:13:47.184 "name": null, 00:13:47.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.184 "is_configured": false, 00:13:47.184 "data_offset": 0, 00:13:47.184 "data_size": 63488 00:13:47.184 }, 00:13:47.184 { 00:13:47.184 "name": "BaseBdev2", 00:13:47.184 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:47.184 "is_configured": true, 00:13:47.184 "data_offset": 2048, 00:13:47.184 "data_size": 63488 00:13:47.184 } 00:13:47.184 ] 00:13:47.184 }' 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.184 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.444 13:30:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:47.444 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.444 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.444 [2024-11-18 13:30:17.484304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:47.444 [2024-11-18 13:30:17.484477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.444 [2024-11-18 13:30:17.484522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:47.444 [2024-11-18 13:30:17.484557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.444 [2024-11-18 13:30:17.485045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.444 [2024-11-18 13:30:17.485110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:47.444 [2024-11-18 13:30:17.485248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:47.444 [2024-11-18 13:30:17.485300] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:47.444 [2024-11-18 13:30:17.485342] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:47.444 [2024-11-18 13:30:17.485390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.704 [2024-11-18 13:30:17.501792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:47.704 spare 00:13:47.704 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.704 [2024-11-18 13:30:17.503668] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.704 13:30:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.644 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.644 "name": "raid_bdev1", 00:13:48.644 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:48.644 "strip_size_kb": 0, 00:13:48.644 
"state": "online", 00:13:48.644 "raid_level": "raid1", 00:13:48.644 "superblock": true, 00:13:48.644 "num_base_bdevs": 2, 00:13:48.644 "num_base_bdevs_discovered": 2, 00:13:48.644 "num_base_bdevs_operational": 2, 00:13:48.644 "process": { 00:13:48.644 "type": "rebuild", 00:13:48.644 "target": "spare", 00:13:48.644 "progress": { 00:13:48.644 "blocks": 20480, 00:13:48.644 "percent": 32 00:13:48.644 } 00:13:48.644 }, 00:13:48.644 "base_bdevs_list": [ 00:13:48.644 { 00:13:48.644 "name": "spare", 00:13:48.644 "uuid": "ccded13f-5736-5b77-82a2-76baf37212ee", 00:13:48.644 "is_configured": true, 00:13:48.644 "data_offset": 2048, 00:13:48.644 "data_size": 63488 00:13:48.644 }, 00:13:48.644 { 00:13:48.645 "name": "BaseBdev2", 00:13:48.645 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:48.645 "is_configured": true, 00:13:48.645 "data_offset": 2048, 00:13:48.645 "data_size": 63488 00:13:48.645 } 00:13:48.645 ] 00:13:48.645 }' 00:13:48.645 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.645 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.645 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.645 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.645 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:48.645 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.645 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.645 [2024-11-18 13:30:18.667444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.905 [2024-11-18 13:30:18.708921] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:48.905 [2024-11-18 13:30:18.708977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.905 [2024-11-18 13:30:18.708993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:48.905 [2024-11-18 13:30:18.708999] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.905 13:30:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.905 "name": "raid_bdev1", 00:13:48.905 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:48.905 "strip_size_kb": 0, 00:13:48.905 "state": "online", 00:13:48.905 "raid_level": "raid1", 00:13:48.905 "superblock": true, 00:13:48.905 "num_base_bdevs": 2, 00:13:48.905 "num_base_bdevs_discovered": 1, 00:13:48.905 "num_base_bdevs_operational": 1, 00:13:48.905 "base_bdevs_list": [ 00:13:48.905 { 00:13:48.905 "name": null, 00:13:48.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.905 "is_configured": false, 00:13:48.905 "data_offset": 0, 00:13:48.905 "data_size": 63488 00:13:48.905 }, 00:13:48.905 { 00:13:48.905 "name": "BaseBdev2", 00:13:48.905 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:48.905 "is_configured": true, 00:13:48.905 "data_offset": 2048, 00:13:48.905 "data_size": 63488 00:13:48.905 } 00:13:48.905 ] 00:13:48.905 }' 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.905 13:30:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.165 "name": "raid_bdev1", 00:13:49.165 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:49.165 "strip_size_kb": 0, 00:13:49.165 "state": "online", 00:13:49.165 "raid_level": "raid1", 00:13:49.165 "superblock": true, 00:13:49.165 "num_base_bdevs": 2, 00:13:49.165 "num_base_bdevs_discovered": 1, 00:13:49.165 "num_base_bdevs_operational": 1, 00:13:49.165 "base_bdevs_list": [ 00:13:49.165 { 00:13:49.165 "name": null, 00:13:49.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.165 "is_configured": false, 00:13:49.165 "data_offset": 0, 00:13:49.165 "data_size": 63488 00:13:49.165 }, 00:13:49.165 { 00:13:49.165 "name": "BaseBdev2", 00:13:49.165 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:49.165 "is_configured": true, 00:13:49.165 "data_offset": 2048, 00:13:49.165 "data_size": 63488 00:13:49.165 } 00:13:49.165 ] 00:13:49.165 }' 00:13:49.165 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.424 [2024-11-18 13:30:19.294718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:49.424 [2024-11-18 13:30:19.294843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.424 [2024-11-18 13:30:19.294878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:49.424 [2024-11-18 13:30:19.294887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.424 [2024-11-18 13:30:19.295354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.424 [2024-11-18 13:30:19.295373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.424 [2024-11-18 13:30:19.295456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:49.424 [2024-11-18 13:30:19.295470] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:49.424 [2024-11-18 13:30:19.295479] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:49.424 [2024-11-18 13:30:19.295490] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:49.424 BaseBdev1 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.424 13:30:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.363 "name": "raid_bdev1", 00:13:50.363 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:50.363 "strip_size_kb": 0, 00:13:50.363 "state": "online", 00:13:50.363 "raid_level": "raid1", 00:13:50.363 "superblock": true, 00:13:50.363 "num_base_bdevs": 2, 00:13:50.363 "num_base_bdevs_discovered": 1, 00:13:50.363 "num_base_bdevs_operational": 1, 00:13:50.363 "base_bdevs_list": [ 00:13:50.363 { 00:13:50.363 "name": null, 00:13:50.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.363 "is_configured": false, 00:13:50.363 "data_offset": 0, 00:13:50.363 "data_size": 63488 00:13:50.363 }, 00:13:50.363 { 00:13:50.363 "name": "BaseBdev2", 00:13:50.363 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:50.363 "is_configured": true, 00:13:50.363 "data_offset": 2048, 00:13:50.363 "data_size": 63488 00:13:50.363 } 00:13:50.363 ] 00:13:50.363 }' 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.363 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.932 "name": "raid_bdev1", 00:13:50.932 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:50.932 "strip_size_kb": 0, 00:13:50.932 "state": "online", 00:13:50.932 "raid_level": "raid1", 00:13:50.932 "superblock": true, 00:13:50.932 "num_base_bdevs": 2, 00:13:50.932 "num_base_bdevs_discovered": 1, 00:13:50.932 "num_base_bdevs_operational": 1, 00:13:50.932 "base_bdevs_list": [ 00:13:50.932 { 00:13:50.932 "name": null, 00:13:50.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.932 "is_configured": false, 00:13:50.932 "data_offset": 0, 00:13:50.932 "data_size": 63488 00:13:50.932 }, 00:13:50.932 { 00:13:50.932 "name": "BaseBdev2", 00:13:50.932 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:50.932 "is_configured": true, 00:13:50.932 "data_offset": 2048, 00:13:50.932 "data_size": 63488 00:13:50.932 } 00:13:50.932 ] 00:13:50.932 }' 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.932 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.933 [2024-11-18 13:30:20.868170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.933 [2024-11-18 13:30:20.868327] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:50.933 [2024-11-18 13:30:20.868342] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:50.933 request: 00:13:50.933 { 00:13:50.933 "base_bdev": "BaseBdev1", 00:13:50.933 "raid_bdev": "raid_bdev1", 00:13:50.933 "method": "bdev_raid_add_base_bdev", 00:13:50.933 "req_id": 1 00:13:50.933 } 00:13:50.933 Got JSON-RPC error response 00:13:50.933 response: 00:13:50.933 { 00:13:50.933 "code": -22, 00:13:50.933 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:50.933 } 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:50.933 13:30:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:51.917 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:51.917 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.917 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.917 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.917 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.917 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:51.917 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.918 "name": "raid_bdev1", 00:13:51.918 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:51.918 "strip_size_kb": 0, 00:13:51.918 "state": "online", 00:13:51.918 "raid_level": "raid1", 00:13:51.918 "superblock": true, 00:13:51.918 "num_base_bdevs": 2, 00:13:51.918 "num_base_bdevs_discovered": 1, 00:13:51.918 "num_base_bdevs_operational": 1, 00:13:51.918 "base_bdevs_list": [ 00:13:51.918 { 00:13:51.918 "name": null, 00:13:51.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.918 "is_configured": false, 00:13:51.918 "data_offset": 0, 00:13:51.918 "data_size": 63488 00:13:51.918 }, 00:13:51.918 { 00:13:51.918 "name": "BaseBdev2", 00:13:51.918 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:51.918 "is_configured": true, 00:13:51.918 "data_offset": 2048, 00:13:51.918 "data_size": 63488 00:13:51.918 } 00:13:51.918 ] 00:13:51.918 }' 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.918 13:30:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.490 13:30:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.490 "name": "raid_bdev1", 00:13:52.490 "uuid": "6d9b0ceb-b157-465f-b177-b081adacb749", 00:13:52.490 "strip_size_kb": 0, 00:13:52.490 "state": "online", 00:13:52.490 "raid_level": "raid1", 00:13:52.490 "superblock": true, 00:13:52.490 "num_base_bdevs": 2, 00:13:52.490 "num_base_bdevs_discovered": 1, 00:13:52.490 "num_base_bdevs_operational": 1, 00:13:52.490 "base_bdevs_list": [ 00:13:52.490 { 00:13:52.490 "name": null, 00:13:52.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.490 "is_configured": false, 00:13:52.490 "data_offset": 0, 00:13:52.490 "data_size": 63488 00:13:52.490 }, 00:13:52.490 { 00:13:52.490 "name": "BaseBdev2", 00:13:52.490 "uuid": "f5c457fd-7ffc-58b0-933b-db2c4257c4ed", 00:13:52.490 "is_configured": true, 00:13:52.490 "data_offset": 2048, 00:13:52.490 "data_size": 63488 00:13:52.490 } 00:13:52.490 ] 00:13:52.490 }' 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.490 13:30:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76837 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76837 ']' 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76837 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76837 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76837' 00:13:52.490 killing process with pid 76837 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76837 00:13:52.490 Received shutdown signal, test time was about 16.871832 seconds 00:13:52.490 00:13:52.490 Latency(us) 00:13:52.490 [2024-11-18T13:30:22.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.490 [2024-11-18T13:30:22.544Z] =================================================================================================================== 00:13:52.490 [2024-11-18T13:30:22.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:52.490 [2024-11-18 13:30:22.485320] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.490 [2024-11-18 13:30:22.485445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.490 [2024-11-18 13:30:22.485494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:13:52.490 13:30:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76837 00:13:52.490 [2024-11-18 13:30:22.485504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:52.749 [2024-11-18 13:30:22.704210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:54.128 00:13:54.128 real 0m19.945s 00:13:54.128 user 0m26.006s 00:13:54.128 sys 0m2.262s 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.128 ************************************ 00:13:54.128 END TEST raid_rebuild_test_sb_io 00:13:54.128 ************************************ 00:13:54.128 13:30:23 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:54.128 13:30:23 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:54.128 13:30:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:54.128 13:30:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.128 13:30:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.128 ************************************ 00:13:54.128 START TEST raid_rebuild_test 00:13:54.128 ************************************ 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:54.128 13:30:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77522 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77522 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77522 ']' 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.128 13:30:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.128 [2024-11-18 13:30:23.968259] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:13:54.128 [2024-11-18 13:30:23.968447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:54.128 Zero copy mechanism will not be used. 00:13:54.128 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77522 ] 00:13:54.128 [2024-11-18 13:30:24.138889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.387 [2024-11-18 13:30:24.244346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.387 [2024-11-18 13:30:24.427227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.387 [2024-11-18 13:30:24.427332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.956 BaseBdev1_malloc 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:54.956 [2024-11-18 13:30:24.835303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:54.956 [2024-11-18 13:30:24.835377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.956 [2024-11-18 13:30:24.835402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:54.956 [2024-11-18 13:30:24.835413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.956 [2024-11-18 13:30:24.837361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.956 [2024-11-18 13:30:24.837400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.956 BaseBdev1 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.956 BaseBdev2_malloc 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.956 [2024-11-18 13:30:24.887816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:54.956 [2024-11-18 13:30:24.887959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:54.956 [2024-11-18 13:30:24.887981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:54.956 [2024-11-18 13:30:24.887991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.956 [2024-11-18 13:30:24.889899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.956 [2024-11-18 13:30:24.889940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:54.956 BaseBdev2 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.956 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.957 BaseBdev3_malloc 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.957 [2024-11-18 13:30:24.969853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:54.957 [2024-11-18 13:30:24.969905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.957 [2024-11-18 13:30:24.969925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:54.957 [2024-11-18 13:30:24.969936] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.957 [2024-11-18 13:30:24.971831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.957 [2024-11-18 13:30:24.971876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:54.957 BaseBdev3 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.957 13:30:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.217 BaseBdev4_malloc 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.217 [2024-11-18 13:30:25.023012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:55.217 [2024-11-18 13:30:25.023067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.217 [2024-11-18 13:30:25.023084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:55.217 [2024-11-18 13:30:25.023095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.217 [2024-11-18 13:30:25.025011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.217 [2024-11-18 13:30:25.025141] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:55.217 BaseBdev4 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.217 spare_malloc 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.217 spare_delay 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.217 [2024-11-18 13:30:25.091440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:55.217 [2024-11-18 13:30:25.091499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.217 [2024-11-18 13:30:25.091518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:55.217 [2024-11-18 13:30:25.091529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.217 [2024-11-18 
13:30:25.093479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.217 [2024-11-18 13:30:25.093520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:55.217 spare 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.217 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.217 [2024-11-18 13:30:25.103461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.217 [2024-11-18 13:30:25.105117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.217 [2024-11-18 13:30:25.105199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.217 [2024-11-18 13:30:25.105248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.217 [2024-11-18 13:30:25.105318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:55.217 [2024-11-18 13:30:25.105330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:55.217 [2024-11-18 13:30:25.105548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:55.218 [2024-11-18 13:30:25.105696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:55.218 [2024-11-18 13:30:25.105707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:55.218 [2024-11-18 13:30:25.105836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:55.218 "name": "raid_bdev1",
00:13:55.218 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:13:55.218 "strip_size_kb": 0,
00:13:55.218 "state": "online",
00:13:55.218 "raid_level": "raid1",
00:13:55.218 "superblock": false,
00:13:55.218 "num_base_bdevs": 4,
00:13:55.218 "num_base_bdevs_discovered": 4,
00:13:55.218 "num_base_bdevs_operational": 4,
00:13:55.218 "base_bdevs_list": [
00:13:55.218 {
00:13:55.218 "name": "BaseBdev1",
00:13:55.218 "uuid": "4b218fd4-719a-5052-a9d1-483218c034b2",
00:13:55.218 "is_configured": true,
00:13:55.218 "data_offset": 0,
00:13:55.218 "data_size": 65536
00:13:55.218 },
00:13:55.218 {
00:13:55.218 "name": "BaseBdev2",
00:13:55.218 "uuid": "721f1000-7760-5623-afe7-1cec3c35e90f",
00:13:55.218 "is_configured": true,
00:13:55.218 "data_offset": 0,
00:13:55.218 "data_size": 65536
00:13:55.218 },
00:13:55.218 {
00:13:55.218 "name": "BaseBdev3",
00:13:55.218 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:13:55.218 "is_configured": true,
00:13:55.218 "data_offset": 0,
00:13:55.218 "data_size": 65536
00:13:55.218 },
00:13:55.218 {
00:13:55.218 "name": "BaseBdev4",
00:13:55.218 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:13:55.218 "is_configured": true,
00:13:55.218 "data_offset": 0,
00:13:55.218 "data_size": 65536
00:13:55.218 }
00:13:55.218 ]
00:13:55.218 }'
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:55.218 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.788 [2024-11-18 13:30:25.563032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:55.788 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:13:55.788 [2024-11-18 13:30:25.814290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
/dev/nbd0
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:56.047 1+0 records in
00:13:56.047 1+0 records out
00:13:56.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002892 s, 14.2 MB/s
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:13:56.047 13:30:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:14:01.323 65536+0 records in
00:14:01.323 65536+0 records out
00:14:01.323 33554432 bytes (34 MB, 32 MiB) copied, 5.23985 s, 6.4 MB/s
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:01.323 [2024-11-18 13:30:31.296051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:01.323 [2024-11-18 13:30:31.328059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.323 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:01.324 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.584 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:01.584 "name": "raid_bdev1",
00:14:01.584 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:14:01.584 "strip_size_kb": 0,
00:14:01.584 "state": "online",
00:14:01.584 "raid_level": "raid1",
00:14:01.584 "superblock": false,
00:14:01.584 "num_base_bdevs": 4,
00:14:01.584 "num_base_bdevs_discovered": 3,
00:14:01.584 "num_base_bdevs_operational": 3,
00:14:01.584 "base_bdevs_list": [
00:14:01.584 {
00:14:01.584 "name": null,
00:14:01.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:01.584 "is_configured": false,
00:14:01.584 "data_offset": 0,
00:14:01.584 "data_size": 65536
00:14:01.584 },
00:14:01.584 {
00:14:01.584 "name": "BaseBdev2",
00:14:01.584 "uuid": "721f1000-7760-5623-afe7-1cec3c35e90f",
00:14:01.584 "is_configured": true,
00:14:01.584 "data_offset": 0,
00:14:01.584 "data_size": 65536
00:14:01.584 },
00:14:01.584 {
00:14:01.584 "name": "BaseBdev3",
00:14:01.584 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:14:01.584 "is_configured": true,
00:14:01.584 "data_offset": 0,
00:14:01.584 "data_size": 65536
00:14:01.584 },
00:14:01.584 {
00:14:01.584 "name": "BaseBdev4",
00:14:01.584 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:14:01.584 "is_configured": true,
00:14:01.584 "data_offset": 0,
00:14:01.584 "data_size": 65536
00:14:01.584 }
00:14:01.584 ]
00:14:01.584 }'
00:14:01.584 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:01.584 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:01.843 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:01.843 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.843 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:01.843 [2024-11-18 13:30:31.711378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:01.843 [2024-11-18 13:30:31.726743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70
00:14:01.843 13:30:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.843 13:30:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:14:01.843 [2024-11-18 13:30:31.728484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:02.781 "name": "raid_bdev1",
00:14:02.781 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:14:02.781 "strip_size_kb": 0,
00:14:02.781 "state": "online",
00:14:02.781 "raid_level": "raid1",
00:14:02.781 "superblock": false,
00:14:02.781 "num_base_bdevs": 4,
00:14:02.781 "num_base_bdevs_discovered": 4,
00:14:02.781 "num_base_bdevs_operational": 4,
00:14:02.781 "process": {
00:14:02.781 "type": "rebuild",
00:14:02.781 "target": "spare",
00:14:02.781 "progress": {
00:14:02.781 "blocks": 20480,
00:14:02.781 "percent": 31
00:14:02.781 }
00:14:02.781 },
00:14:02.781 "base_bdevs_list": [
00:14:02.781 {
00:14:02.781 "name": "spare",
00:14:02.781 "uuid": "8974f9b5-8ebe-5028-be44-44b10329b8a1",
00:14:02.781 "is_configured": true,
00:14:02.781 "data_offset": 0,
00:14:02.781 "data_size": 65536
00:14:02.781 },
00:14:02.781 {
00:14:02.781 "name": "BaseBdev2",
00:14:02.781 "uuid": "721f1000-7760-5623-afe7-1cec3c35e90f",
00:14:02.781 "is_configured": true,
00:14:02.781 "data_offset": 0,
00:14:02.781 "data_size": 65536
00:14:02.781 },
00:14:02.781 {
00:14:02.781 "name": "BaseBdev3",
00:14:02.781 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:14:02.781 "is_configured": true,
00:14:02.781 "data_offset": 0,
00:14:02.781 "data_size": 65536
00:14:02.781 },
00:14:02.781 {
00:14:02.781 "name": "BaseBdev4",
00:14:02.781 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:14:02.781 "is_configured": true,
00:14:02.781 "data_offset": 0,
00:14:02.781 "data_size": 65536
00:14:02.781 }
00:14:02.781 ]
00:14:02.781 }'
00:14:02.781 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.041 [2024-11-18 13:30:32.880191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:03.041 [2024-11-18 13:30:32.933055] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:14:03.041 [2024-11-18 13:30:32.933171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:03.041 [2024-11-18 13:30:32.933208] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:14:03.041 [2024-11-18 13:30:32.933230] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:03.041 "name": "raid_bdev1",
00:14:03.041 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:14:03.041 "strip_size_kb": 0,
00:14:03.041 "state": "online",
00:14:03.041 "raid_level": "raid1",
00:14:03.041 "superblock": false,
00:14:03.041 "num_base_bdevs": 4,
00:14:03.041 "num_base_bdevs_discovered": 3,
00:14:03.041 "num_base_bdevs_operational": 3,
00:14:03.041 "base_bdevs_list": [
00:14:03.041 {
00:14:03.041 "name": null,
00:14:03.041 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:03.041 "is_configured": false,
00:14:03.041 "data_offset": 0,
00:14:03.041 "data_size": 65536
00:14:03.041 },
00:14:03.041 {
00:14:03.041 "name": "BaseBdev2",
00:14:03.041 "uuid": "721f1000-7760-5623-afe7-1cec3c35e90f",
00:14:03.041 "is_configured": true,
00:14:03.041 "data_offset": 0,
00:14:03.041 "data_size": 65536
00:14:03.041 },
00:14:03.041 {
00:14:03.041 "name": "BaseBdev3",
00:14:03.041 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:14:03.041 "is_configured": true,
00:14:03.041 "data_offset": 0,
00:14:03.041 "data_size": 65536
00:14:03.041 },
00:14:03.041 {
00:14:03.041 "name": "BaseBdev4",
00:14:03.041 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:14:03.041 "is_configured": true,
00:14:03.041 "data_offset": 0,
00:14:03.041 "data_size": 65536
00:14:03.041 }
00:14:03.041 ]
00:14:03.041 }'
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:03.041 13:30:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.301 13:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.560 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:03.560 "name": "raid_bdev1",
00:14:03.560 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:14:03.561 "strip_size_kb": 0,
00:14:03.561 "state": "online",
00:14:03.561 "raid_level": "raid1",
00:14:03.561 "superblock": false,
00:14:03.561 "num_base_bdevs": 4,
00:14:03.561 "num_base_bdevs_discovered": 3,
00:14:03.561 "num_base_bdevs_operational": 3,
00:14:03.561 "base_bdevs_list": [
00:14:03.561 {
00:14:03.561 "name": null,
00:14:03.561 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:03.561 "is_configured": false,
00:14:03.561 "data_offset": 0,
00:14:03.561 "data_size": 65536
00:14:03.561 },
00:14:03.561 {
00:14:03.561 "name": "BaseBdev2",
00:14:03.561 "uuid": "721f1000-7760-5623-afe7-1cec3c35e90f",
00:14:03.561 "is_configured": true,
00:14:03.561 "data_offset": 0,
00:14:03.561 "data_size": 65536
00:14:03.561 },
00:14:03.561 {
00:14:03.561 "name": "BaseBdev3",
00:14:03.561 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:14:03.561 "is_configured": true,
00:14:03.561 "data_offset": 0,
00:14:03.561 "data_size": 65536
00:14:03.561 },
00:14:03.561 {
00:14:03.561 "name": "BaseBdev4",
00:14:03.561 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:14:03.561 "is_configured": true,
00:14:03.561 "data_offset": 0,
00:14:03.561 "data_size": 65536
00:14:03.561 }
00:14:03.561 ]
00:14:03.561 }'
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:03.561 [2024-11-18 13:30:33.480027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:14:03.561 [2024-11-18 13:30:33.493376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.561 13:30:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:14:03.561 [2024-11-18 13:30:33.495088] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.499 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:04.759 "name": "raid_bdev1",
00:14:04.759 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:14:04.759 "strip_size_kb": 0,
00:14:04.759 "state": "online",
00:14:04.759 "raid_level": "raid1",
00:14:04.759 "superblock": false,
00:14:04.759 "num_base_bdevs": 4,
00:14:04.759 "num_base_bdevs_discovered": 4,
00:14:04.759 "num_base_bdevs_operational": 4,
00:14:04.759 "process": {
00:14:04.759 "type": "rebuild",
00:14:04.759 "target": "spare",
00:14:04.759 "progress": {
00:14:04.759 "blocks": 20480,
00:14:04.759 "percent": 31
00:14:04.759 }
00:14:04.759 },
00:14:04.759 "base_bdevs_list": [
00:14:04.759 {
00:14:04.759 "name": "spare",
00:14:04.759 "uuid": "8974f9b5-8ebe-5028-be44-44b10329b8a1",
00:14:04.759 "is_configured": true,
00:14:04.759 "data_offset": 0,
00:14:04.759 "data_size": 65536
00:14:04.759 },
00:14:04.759 {
00:14:04.759 "name": "BaseBdev2",
00:14:04.759 "uuid": "721f1000-7760-5623-afe7-1cec3c35e90f",
00:14:04.759 "is_configured": true,
00:14:04.759 "data_offset": 0,
00:14:04.759 "data_size": 65536
00:14:04.759 },
00:14:04.759 {
00:14:04.759 "name": "BaseBdev3",
00:14:04.759 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:14:04.759 "is_configured": true,
00:14:04.759 "data_offset": 0,
00:14:04.759 "data_size": 65536
00:14:04.759 },
00:14:04.759 {
00:14:04.759 "name": "BaseBdev4",
00:14:04.759 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:14:04.759 "is_configured": true,
00:14:04.759 "data_offset": 0,
00:14:04.759 "data_size": 65536
00:14:04.759 }
00:14:04.759 ]
00:14:04.759 }'
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']'
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.759 [2024-11-18 13:30:34.642741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:04.759 [2024-11-18 13:30:34.699597] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40
00:14:04.759 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:04.760 "name": "raid_bdev1",
00:14:04.760 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:14:04.760 "strip_size_kb": 0,
00:14:04.760 "state": "online",
00:14:04.760 "raid_level": "raid1",
00:14:04.760 "superblock": false,
00:14:04.760 "num_base_bdevs": 4,
00:14:04.760 "num_base_bdevs_discovered": 3,
00:14:04.760 "num_base_bdevs_operational": 3,
00:14:04.760 "process": {
00:14:04.760 "type": "rebuild",
00:14:04.760 "target": "spare",
00:14:04.760 "progress": {
00:14:04.760 "blocks": 24576,
00:14:04.760 "percent": 37
00:14:04.760 }
00:14:04.760 },
00:14:04.760 "base_bdevs_list": [
00:14:04.760 {
00:14:04.760 "name": "spare",
00:14:04.760 "uuid": "8974f9b5-8ebe-5028-be44-44b10329b8a1",
00:14:04.760 "is_configured": true,
00:14:04.760 "data_offset": 0,
00:14:04.760 "data_size": 65536
00:14:04.760 },
00:14:04.760 {
00:14:04.760 "name": null,
00:14:04.760 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:04.760 "is_configured": false,
00:14:04.760 "data_offset": 0,
00:14:04.760 "data_size": 65536
00:14:04.760 },
00:14:04.760 {
00:14:04.760 "name": "BaseBdev3",
00:14:04.760 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:14:04.760 "is_configured": true,
00:14:04.760 "data_offset": 0,
00:14:04.760 "data_size": 65536
00:14:04.760 },
00:14:04.760 {
00:14:04.760 "name": "BaseBdev4",
00:14:04.760 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:14:04.760 "is_configured": true,
00:14:04.760 "data_offset": 0,
00:14:04.760 "data_size": 65536
00:14:04.760 }
00:14:04.760 ]
00:14:04.760 }'
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:04.760 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=448
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:05.020 "name": "raid_bdev1",
00:14:05.020 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:14:05.020 "strip_size_kb": 0,
00:14:05.020 "state": "online",
00:14:05.020 "raid_level": "raid1",
00:14:05.020 "superblock": false,
00:14:05.020 "num_base_bdevs": 4,
00:14:05.020 "num_base_bdevs_discovered": 3,
00:14:05.020 "num_base_bdevs_operational": 3,
00:14:05.020 "process": {
00:14:05.020 "type": "rebuild",
00:14:05.020 "target": "spare",
00:14:05.020 "progress": {
00:14:05.020 "blocks": 26624,
00:14:05.020 "percent": 40
00:14:05.020 }
00:14:05.020 },
00:14:05.020 "base_bdevs_list": [
00:14:05.020 {
00:14:05.020 "name": "spare",
00:14:05.020 "uuid": "8974f9b5-8ebe-5028-be44-44b10329b8a1",
00:14:05.020 "is_configured": true,
00:14:05.020 "data_offset": 0,
00:14:05.020 "data_size": 65536
00:14:05.020 },
00:14:05.020 {
00:14:05.020 "name": null,
00:14:05.020 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:05.020 "is_configured": false,
00:14:05.020 "data_offset": 0,
00:14:05.020 "data_size": 65536
00:14:05.020 },
00:14:05.020 {
00:14:05.020 "name": "BaseBdev3",
00:14:05.020 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:14:05.020 "is_configured": true,
00:14:05.020 "data_offset": 0,
00:14:05.020 "data_size": 65536
00:14:05.020 },
00:14:05.020 {
00:14:05.020 "name": "BaseBdev4",
00:14:05.020 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:14:05.020 "is_configured": true,
00:14:05.020 "data_offset": 0,
00:14:05.020 "data_size": 65536
00:14:05.020 }
00:14:05.020 ]
00:14:05.020 }'
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:05.020 13:30:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:05.961 13:30:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.961 13:30:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:06.231 13:30:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.231 13:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:14:06.231 "name": "raid_bdev1",
00:14:06.231 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8",
00:14:06.231 "strip_size_kb": 0,
00:14:06.231 "state": "online",
00:14:06.231 "raid_level": "raid1",
00:14:06.231 "superblock": false,
00:14:06.231 "num_base_bdevs": 4,
00:14:06.231 "num_base_bdevs_discovered": 3,
00:14:06.231 "num_base_bdevs_operational": 3,
00:14:06.231 "process": {
00:14:06.231 "type": "rebuild",
00:14:06.231 "target": "spare",
00:14:06.231 "progress": {
00:14:06.231 "blocks": 51200,
00:14:06.231 "percent": 78
00:14:06.231 }
00:14:06.231 },
00:14:06.231 "base_bdevs_list": [
00:14:06.231 {
00:14:06.231 "name": "spare",
00:14:06.231 "uuid": "8974f9b5-8ebe-5028-be44-44b10329b8a1",
00:14:06.231 "is_configured": true,
00:14:06.231 "data_offset": 0,
00:14:06.231 "data_size": 65536
00:14:06.231 },
00:14:06.231 {
00:14:06.231 "name": null,
00:14:06.231 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:06.231 "is_configured": false,
00:14:06.231 "data_offset": 0,
00:14:06.231 "data_size": 65536
00:14:06.231 },
00:14:06.231 {
00:14:06.231 "name": "BaseBdev3",
00:14:06.231 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8",
00:14:06.231 "is_configured": true,
00:14:06.231 "data_offset": 0,
00:14:06.231 "data_size": 65536
00:14:06.231 },
00:14:06.231 {
00:14:06.231 "name": "BaseBdev4",
00:14:06.231 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd",
00:14:06.231 "is_configured": true,
00:14:06.231 "data_offset": 0,
00:14:06.231 "data_size": 65536
00:14:06.231 }
00:14:06.231 ]
00:14:06.231 }'
00:14:06.231 13:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:14:06.231 13:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:06.231 13:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:14:06.231 13:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:14:06.231 13:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1
00:14:06.813 [2024-11-18 13:30:36.707593] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:14:06.813 [2024-11-18 13:30:36.707726] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:14:06.813 [2024-11-18 13:30:36.707781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:14:07.381 13:30:37
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.381 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.381 "name": "raid_bdev1", 00:14:07.381 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8", 00:14:07.381 "strip_size_kb": 0, 00:14:07.381 "state": "online", 00:14:07.381 "raid_level": "raid1", 00:14:07.381 "superblock": false, 00:14:07.381 "num_base_bdevs": 4, 00:14:07.381 "num_base_bdevs_discovered": 3, 00:14:07.381 "num_base_bdevs_operational": 3, 00:14:07.381 "base_bdevs_list": [ 00:14:07.381 { 00:14:07.381 "name": "spare", 00:14:07.382 "uuid": "8974f9b5-8ebe-5028-be44-44b10329b8a1", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": null, 00:14:07.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.382 "is_configured": false, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": "BaseBdev3", 00:14:07.382 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": "BaseBdev4", 00:14:07.382 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 } 00:14:07.382 ] 00:14:07.382 }' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.382 "name": "raid_bdev1", 00:14:07.382 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8", 00:14:07.382 "strip_size_kb": 0, 00:14:07.382 "state": "online", 00:14:07.382 "raid_level": "raid1", 00:14:07.382 "superblock": false, 00:14:07.382 "num_base_bdevs": 4, 00:14:07.382 "num_base_bdevs_discovered": 3, 00:14:07.382 "num_base_bdevs_operational": 3, 00:14:07.382 
"base_bdevs_list": [ 00:14:07.382 { 00:14:07.382 "name": "spare", 00:14:07.382 "uuid": "8974f9b5-8ebe-5028-be44-44b10329b8a1", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": null, 00:14:07.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.382 "is_configured": false, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": "BaseBdev3", 00:14:07.382 "uuid": "1b2066e5-e69a-53cf-9e77-c808cd669bb8", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": "BaseBdev4", 00:14:07.382 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 } 00:14:07.382 ] 00:14:07.382 }' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.382 "name": "raid_bdev1", 00:14:07.382 "uuid": "2c068dd6-fcee-4868-bbc7-7c0683700ec8", 00:14:07.382 "strip_size_kb": 0, 00:14:07.382 "state": "online", 00:14:07.382 "raid_level": "raid1", 00:14:07.382 "superblock": false, 00:14:07.382 "num_base_bdevs": 4, 00:14:07.382 "num_base_bdevs_discovered": 3, 00:14:07.382 "num_base_bdevs_operational": 3, 00:14:07.382 "base_bdevs_list": [ 00:14:07.382 { 00:14:07.382 "name": "spare", 00:14:07.382 "uuid": "8974f9b5-8ebe-5028-be44-44b10329b8a1", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": null, 00:14:07.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.382 "is_configured": false, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": "BaseBdev3", 00:14:07.382 "uuid": 
"1b2066e5-e69a-53cf-9e77-c808cd669bb8", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 }, 00:14:07.382 { 00:14:07.382 "name": "BaseBdev4", 00:14:07.382 "uuid": "fa908aa3-c704-54dd-911f-68b2c01172cd", 00:14:07.382 "is_configured": true, 00:14:07.382 "data_offset": 0, 00:14:07.382 "data_size": 65536 00:14:07.382 } 00:14:07.382 ] 00:14:07.382 }' 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.382 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.949 [2024-11-18 13:30:37.742877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.949 [2024-11-18 13:30:37.742956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.949 [2024-11-18 13:30:37.743051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.949 [2024-11-18 13:30:37.743156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.949 [2024-11-18 13:30:37.743238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.949 13:30:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:08.209 /dev/nbd0 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:08.209 13:30:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.209 1+0 records in 00:14:08.209 1+0 records out 00:14:08.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372876 s, 11.0 MB/s 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.209 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:08.209 /dev/nbd1 00:14:08.469 
13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.469 1+0 records in 00:14:08.469 1+0 records out 00:14:08.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474084 s, 8.6 MB/s 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.469 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.728 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77522 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77522 ']' 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77522 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77522 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77522' 00:14:08.986 killing process with pid 77522 00:14:08.986 
13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77522 00:14:08.986 Received shutdown signal, test time was about 60.000000 seconds 00:14:08.986 00:14:08.986 Latency(us) 00:14:08.986 [2024-11-18T13:30:39.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.986 [2024-11-18T13:30:39.040Z] =================================================================================================================== 00:14:08.986 [2024-11-18T13:30:39.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.986 [2024-11-18 13:30:38.946508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.986 13:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77522 00:14:09.555 [2024-11-18 13:30:39.405778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:10.497 00:14:10.497 real 0m16.569s 00:14:10.497 user 0m18.374s 00:14:10.497 sys 0m2.966s 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.497 ************************************ 00:14:10.497 END TEST raid_rebuild_test 00:14:10.497 ************************************ 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.497 13:30:40 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:10.497 13:30:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:10.497 13:30:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.497 13:30:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.497 ************************************ 00:14:10.497 START TEST raid_rebuild_test_sb 00:14:10.497 ************************************ 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77957 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77957 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77957 ']' 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.497 13:30:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.758 [2024-11-18 13:30:40.610362] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:10.758 [2024-11-18 13:30:40.610517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.758 Zero copy mechanism will not be used. 00:14:10.758 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77957 ] 00:14:10.758 [2024-11-18 13:30:40.781254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.017 [2024-11-18 13:30:40.885071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.277 [2024-11-18 13:30:41.072153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.277 [2024-11-18 13:30:41.072282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.536 BaseBdev1_malloc 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.536 [2024-11-18 13:30:41.484817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.536 [2024-11-18 13:30:41.484967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.536 [2024-11-18 13:30:41.485007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.536 [2024-11-18 13:30:41.485041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.536 [2024-11-18 13:30:41.487034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.536 [2024-11-18 13:30:41.487112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.536 BaseBdev1 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.536 BaseBdev2_malloc 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.536 [2024-11-18 13:30:41.537975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.536 [2024-11-18 13:30:41.538032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.536 [2024-11-18 13:30:41.538049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:11.536 [2024-11-18 13:30:41.538062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.536 [2024-11-18 13:30:41.540100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.536 BaseBdev2 00:14:11.536 [2024-11-18 13:30:41.540205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.536 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.537 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.537 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:11.537 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.537 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 BaseBdev3_malloc 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
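[Editor's sketch] The xtrace above is iterating the test's base-bdev setup loop. A hypothetical reconstruction of that sequence as a plain script is below; it is not part of the test output. A real run would send each line to an SPDK target via `scripts/rpc.py` on `/var/tmp/spdk.sock`; here the commands are only echoed so the sketch is self-contained.

```shell
# Hypothetical sketch of the setup loop the xtrace above walks through.
# Each pass creates a 32 MiB malloc bdev with 512-byte blocks, then wraps
# it in a passthru bdev. Commands are echoed, not sent to a target.
n=0
for i in 1 2 3 4; do
    echo "rpc.py bdev_malloc_create 32 512 -b BaseBdev${i}_malloc"
    echo "rpc.py bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}"
    n=$((n + 2))
done
echo "$n commands"
```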
00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 [2024-11-18 13:30:41.626146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:11.796 [2024-11-18 13:30:41.626251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.796 [2024-11-18 13:30:41.626286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:11.796 [2024-11-18 13:30:41.626340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.796 [2024-11-18 13:30:41.628355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.796 [2024-11-18 13:30:41.628428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:11.796 BaseBdev3 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 BaseBdev4_malloc 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:11.796 [2024-11-18 13:30:41.679869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:11.796 [2024-11-18 13:30:41.679977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.796 [2024-11-18 13:30:41.680012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:11.796 [2024-11-18 13:30:41.680043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.796 [2024-11-18 13:30:41.681981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.796 [2024-11-18 13:30:41.682057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:11.796 BaseBdev4 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 spare_malloc 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 spare_delay 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.796 13:30:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 [2024-11-18 13:30:41.744798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.796 [2024-11-18 13:30:41.744923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.796 [2024-11-18 13:30:41.744966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:11.796 [2024-11-18 13:30:41.745003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.796 [2024-11-18 13:30:41.747231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.796 [2024-11-18 13:30:41.747305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.796 spare 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 [2024-11-18 13:30:41.756836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.796 [2024-11-18 13:30:41.758786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.796 [2024-11-18 13:30:41.758891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.796 [2024-11-18 13:30:41.758962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:11.796 [2024-11-18 13:30:41.759203] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:11.796 [2024-11-18 13:30:41.759255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:11.796 [2024-11-18 13:30:41.759499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:11.796 [2024-11-18 13:30:41.759703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:11.796 [2024-11-18 13:30:41.759744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:11.796 [2024-11-18 13:30:41.759927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.796 13:30:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.796 "name": "raid_bdev1", 00:14:11.796 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:11.796 "strip_size_kb": 0, 00:14:11.796 "state": "online", 00:14:11.796 "raid_level": "raid1", 00:14:11.796 "superblock": true, 00:14:11.796 "num_base_bdevs": 4, 00:14:11.796 "num_base_bdevs_discovered": 4, 00:14:11.796 "num_base_bdevs_operational": 4, 00:14:11.796 "base_bdevs_list": [ 00:14:11.796 { 00:14:11.796 "name": "BaseBdev1", 00:14:11.796 "uuid": "d003f92c-db4c-54bf-9432-c9e0f9289d4c", 00:14:11.796 "is_configured": true, 00:14:11.796 "data_offset": 2048, 00:14:11.796 "data_size": 63488 00:14:11.796 }, 00:14:11.796 { 00:14:11.796 "name": "BaseBdev2", 00:14:11.796 "uuid": "b8bcfb0b-9bba-587f-bb41-9bf381dc9a2f", 00:14:11.796 "is_configured": true, 00:14:11.796 "data_offset": 2048, 00:14:11.796 "data_size": 63488 00:14:11.796 }, 00:14:11.796 { 00:14:11.796 "name": "BaseBdev3", 00:14:11.796 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:11.796 "is_configured": true, 00:14:11.796 "data_offset": 2048, 00:14:11.796 "data_size": 63488 00:14:11.796 }, 00:14:11.796 { 00:14:11.796 "name": "BaseBdev4", 00:14:11.796 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:11.796 "is_configured": true, 00:14:11.796 "data_offset": 2048, 00:14:11.797 "data_size": 63488 00:14:11.797 } 00:14:11.797 ] 00:14:11.797 }' 00:14:11.797 13:30:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.797 13:30:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.365 [2024-11-18 13:30:42.220316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
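[Editor's sketch] The `raid_bdev_size=63488` and `data_offset=2048` values read back above follow from simple arithmetic over the base bdevs created earlier; a minimal sketch, assuming the 32 MiB / 512-byte-block malloc bdevs from the log and the 2048-block superblock offset it reports (for raid1, capacity equals a single member's data size):

```shell
# Size arithmetic behind raid_bdev_size=63488 and data_offset=2048 above
# (the 2048-block superblock reservation is taken from the log itself).
block_size=512
base_mib=32                                               # bdev_malloc_create 32 512
total_blocks=$(( base_mib * 1024 * 1024 / block_size ))   # 65536 blocks per member
data_offset=2048                                          # superblock ('-s') area
data_size=$(( total_blocks - data_offset ))               # 63488 data blocks
bytes=$(( data_size * block_size ))                       # byte count of a full-device copy
echo "$data_size blocks, $bytes bytes"
```

The byte figure matches the 32505856 bytes the full-device `dd` in this log reports copying.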
00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.365 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.366 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:12.366 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.366 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.366 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:12.625 [2024-11-18 13:30:42.495557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:12.625 /dev/nbd0 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:12.625 
13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.625 1+0 records in 00:14:12.625 1+0 records out 00:14:12.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405995 s, 10.1 MB/s 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:12.625 13:30:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:17.903 63488+0 records in 00:14:17.903 63488+0 records out 00:14:17.903 32505856 bytes (33 MB, 31 MiB) copied, 4.969 s, 6.5 MB/s 00:14:17.903 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:17.903 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 
-- # local rpc_server=/var/tmp/spdk.sock 00:14:17.903 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:17.903 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:17.903 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:17.903 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:17.903 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:17.903 [2024-11-18 13:30:47.724964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.904 [2024-11-18 13:30:47.756975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.904 13:30:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.904 "name": "raid_bdev1", 00:14:17.904 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:17.904 "strip_size_kb": 0, 00:14:17.904 "state": "online", 
00:14:17.904 "raid_level": "raid1", 00:14:17.904 "superblock": true, 00:14:17.904 "num_base_bdevs": 4, 00:14:17.904 "num_base_bdevs_discovered": 3, 00:14:17.904 "num_base_bdevs_operational": 3, 00:14:17.904 "base_bdevs_list": [ 00:14:17.904 { 00:14:17.904 "name": null, 00:14:17.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.904 "is_configured": false, 00:14:17.904 "data_offset": 0, 00:14:17.904 "data_size": 63488 00:14:17.904 }, 00:14:17.904 { 00:14:17.904 "name": "BaseBdev2", 00:14:17.904 "uuid": "b8bcfb0b-9bba-587f-bb41-9bf381dc9a2f", 00:14:17.904 "is_configured": true, 00:14:17.904 "data_offset": 2048, 00:14:17.904 "data_size": 63488 00:14:17.904 }, 00:14:17.904 { 00:14:17.904 "name": "BaseBdev3", 00:14:17.904 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:17.904 "is_configured": true, 00:14:17.904 "data_offset": 2048, 00:14:17.904 "data_size": 63488 00:14:17.904 }, 00:14:17.904 { 00:14:17.904 "name": "BaseBdev4", 00:14:17.904 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:17.904 "is_configured": true, 00:14:17.904 "data_offset": 2048, 00:14:17.904 "data_size": 63488 00:14:17.904 } 00:14:17.904 ] 00:14:17.904 }' 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.904 13:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.163 13:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:18.164 13:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.164 13:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.164 [2024-11-18 13:30:48.156284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.164 [2024-11-18 13:30:48.171860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:18.164 13:30:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.164 13:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:18.164 [2024-11-18 13:30:48.173638] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.544 "name": "raid_bdev1", 00:14:19.544 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:19.544 "strip_size_kb": 0, 00:14:19.544 "state": "online", 00:14:19.544 "raid_level": "raid1", 00:14:19.544 "superblock": true, 00:14:19.544 "num_base_bdevs": 4, 00:14:19.544 "num_base_bdevs_discovered": 4, 00:14:19.544 "num_base_bdevs_operational": 4, 00:14:19.544 "process": { 00:14:19.544 "type": "rebuild", 00:14:19.544 "target": "spare", 00:14:19.544 "progress": { 00:14:19.544 "blocks": 20480, 
00:14:19.544 "percent": 32 00:14:19.544 } 00:14:19.544 }, 00:14:19.544 "base_bdevs_list": [ 00:14:19.544 { 00:14:19.544 "name": "spare", 00:14:19.544 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev2", 00:14:19.544 "uuid": "b8bcfb0b-9bba-587f-bb41-9bf381dc9a2f", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev3", 00:14:19.544 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev4", 00:14:19.544 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 } 00:14:19.544 ] 00:14:19.544 }' 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.544 [2024-11-18 13:30:49.336845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.544 [2024-11-18 13:30:49.378459] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.544 [2024-11-18 13:30:49.378567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.544 [2024-11-18 13:30:49.378604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.544 [2024-11-18 13:30:49.378626] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.544 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.544 "name": "raid_bdev1", 00:14:19.544 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:19.544 "strip_size_kb": 0, 00:14:19.544 "state": "online", 00:14:19.544 "raid_level": "raid1", 00:14:19.544 "superblock": true, 00:14:19.544 "num_base_bdevs": 4, 00:14:19.544 "num_base_bdevs_discovered": 3, 00:14:19.544 "num_base_bdevs_operational": 3, 00:14:19.544 "base_bdevs_list": [ 00:14:19.544 { 00:14:19.544 "name": null, 00:14:19.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.544 "is_configured": false, 00:14:19.544 "data_offset": 0, 00:14:19.544 "data_size": 63488 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev2", 00:14:19.544 "uuid": "b8bcfb0b-9bba-587f-bb41-9bf381dc9a2f", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev3", 00:14:19.544 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev4", 00:14:19.544 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.545 "data_size": 63488 00:14:19.545 } 00:14:19.545 ] 00:14:19.545 }' 00:14:19.545 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.545 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.803 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
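[Editor's sketch] The rebuild progress the `jq` checks in this log read out (`"blocks": 20480`, `"percent": 32`) is integer arithmetic over the 63488-block data size; a minimal sketch:

```shell
# Progress percentage as reported in the rebuild JSON in this log:
# 20480 of 63488 data blocks rebuilt, truncated to an integer percent.
blocks_done=20480
data_size=63488
percent=$(( blocks_done * 100 / data_size ))
echo "${percent}%"
```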
00:14:19.803 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.804 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.804 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.804 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.804 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.804 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.804 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.804 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.804 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.063 "name": "raid_bdev1", 00:14:20.063 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:20.063 "strip_size_kb": 0, 00:14:20.063 "state": "online", 00:14:20.063 "raid_level": "raid1", 00:14:20.063 "superblock": true, 00:14:20.063 "num_base_bdevs": 4, 00:14:20.063 "num_base_bdevs_discovered": 3, 00:14:20.063 "num_base_bdevs_operational": 3, 00:14:20.063 "base_bdevs_list": [ 00:14:20.063 { 00:14:20.063 "name": null, 00:14:20.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.063 "is_configured": false, 00:14:20.063 "data_offset": 0, 00:14:20.063 "data_size": 63488 00:14:20.063 }, 00:14:20.063 { 00:14:20.063 "name": "BaseBdev2", 00:14:20.063 "uuid": "b8bcfb0b-9bba-587f-bb41-9bf381dc9a2f", 00:14:20.063 "is_configured": true, 00:14:20.063 "data_offset": 2048, 00:14:20.063 "data_size": 63488 00:14:20.063 }, 00:14:20.063 { 00:14:20.063 "name": "BaseBdev3", 00:14:20.063 "uuid": 
"34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:20.063 "is_configured": true, 00:14:20.063 "data_offset": 2048, 00:14:20.063 "data_size": 63488 00:14:20.063 }, 00:14:20.063 { 00:14:20.063 "name": "BaseBdev4", 00:14:20.063 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:20.063 "is_configured": true, 00:14:20.063 "data_offset": 2048, 00:14:20.063 "data_size": 63488 00:14:20.063 } 00:14:20.063 ] 00:14:20.063 }' 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.063 [2024-11-18 13:30:49.962506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:20.063 [2024-11-18 13:30:49.976556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.063 13:30:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:20.064 [2024-11-18 13:30:49.978401] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.002 13:30:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.002 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.002 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.002 "name": "raid_bdev1", 00:14:21.002 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:21.002 "strip_size_kb": 0, 00:14:21.002 "state": "online", 00:14:21.002 "raid_level": "raid1", 00:14:21.002 "superblock": true, 00:14:21.002 "num_base_bdevs": 4, 00:14:21.002 "num_base_bdevs_discovered": 4, 00:14:21.002 "num_base_bdevs_operational": 4, 00:14:21.002 "process": { 00:14:21.002 "type": "rebuild", 00:14:21.002 "target": "spare", 00:14:21.002 "progress": { 00:14:21.002 "blocks": 20480, 00:14:21.002 "percent": 32 00:14:21.002 } 00:14:21.002 }, 00:14:21.002 "base_bdevs_list": [ 00:14:21.002 { 00:14:21.002 "name": "spare", 00:14:21.002 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:21.002 "is_configured": true, 00:14:21.002 "data_offset": 2048, 00:14:21.002 "data_size": 63488 00:14:21.002 }, 00:14:21.002 { 00:14:21.002 "name": "BaseBdev2", 00:14:21.002 "uuid": "b8bcfb0b-9bba-587f-bb41-9bf381dc9a2f", 00:14:21.002 "is_configured": true, 00:14:21.002 "data_offset": 2048, 
00:14:21.002 "data_size": 63488 00:14:21.002 }, 00:14:21.002 { 00:14:21.002 "name": "BaseBdev3", 00:14:21.002 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:21.002 "is_configured": true, 00:14:21.002 "data_offset": 2048, 00:14:21.002 "data_size": 63488 00:14:21.002 }, 00:14:21.002 { 00:14:21.002 "name": "BaseBdev4", 00:14:21.002 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:21.002 "is_configured": true, 00:14:21.002 "data_offset": 2048, 00:14:21.002 "data_size": 63488 00:14:21.002 } 00:14:21.002 ] 00:14:21.002 }' 00:14:21.002 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:21.262 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.262 [2024-11-18 13:30:51.138397] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:21.262 [2024-11-18 13:30:51.283215] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.262 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.522 "name": "raid_bdev1", 00:14:21.522 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:21.522 "strip_size_kb": 0, 00:14:21.522 "state": "online", 00:14:21.522 "raid_level": "raid1", 00:14:21.522 "superblock": true, 00:14:21.522 "num_base_bdevs": 4, 
00:14:21.522 "num_base_bdevs_discovered": 3, 00:14:21.522 "num_base_bdevs_operational": 3, 00:14:21.522 "process": { 00:14:21.522 "type": "rebuild", 00:14:21.522 "target": "spare", 00:14:21.522 "progress": { 00:14:21.522 "blocks": 24576, 00:14:21.522 "percent": 38 00:14:21.522 } 00:14:21.522 }, 00:14:21.522 "base_bdevs_list": [ 00:14:21.522 { 00:14:21.522 "name": "spare", 00:14:21.522 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:21.522 "is_configured": true, 00:14:21.522 "data_offset": 2048, 00:14:21.522 "data_size": 63488 00:14:21.522 }, 00:14:21.522 { 00:14:21.522 "name": null, 00:14:21.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.522 "is_configured": false, 00:14:21.522 "data_offset": 0, 00:14:21.522 "data_size": 63488 00:14:21.522 }, 00:14:21.522 { 00:14:21.522 "name": "BaseBdev3", 00:14:21.522 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:21.522 "is_configured": true, 00:14:21.522 "data_offset": 2048, 00:14:21.522 "data_size": 63488 00:14:21.522 }, 00:14:21.522 { 00:14:21.522 "name": "BaseBdev4", 00:14:21.522 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:21.522 "is_configured": true, 00:14:21.522 "data_offset": 2048, 00:14:21.522 "data_size": 63488 00:14:21.522 } 00:14:21.522 ] 00:14:21.522 }' 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=465 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.522 "name": "raid_bdev1", 00:14:21.522 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:21.522 "strip_size_kb": 0, 00:14:21.522 "state": "online", 00:14:21.522 "raid_level": "raid1", 00:14:21.522 "superblock": true, 00:14:21.522 "num_base_bdevs": 4, 00:14:21.522 "num_base_bdevs_discovered": 3, 00:14:21.522 "num_base_bdevs_operational": 3, 00:14:21.522 "process": { 00:14:21.522 "type": "rebuild", 00:14:21.522 "target": "spare", 00:14:21.522 "progress": { 00:14:21.522 "blocks": 26624, 00:14:21.522 "percent": 41 00:14:21.522 } 00:14:21.522 }, 00:14:21.522 "base_bdevs_list": [ 00:14:21.522 { 00:14:21.522 "name": "spare", 00:14:21.522 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:21.522 "is_configured": true, 00:14:21.522 "data_offset": 2048, 00:14:21.522 "data_size": 63488 00:14:21.522 }, 00:14:21.522 { 
00:14:21.522 "name": null, 00:14:21.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.522 "is_configured": false, 00:14:21.522 "data_offset": 0, 00:14:21.522 "data_size": 63488 00:14:21.522 }, 00:14:21.522 { 00:14:21.522 "name": "BaseBdev3", 00:14:21.522 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:21.522 "is_configured": true, 00:14:21.522 "data_offset": 2048, 00:14:21.522 "data_size": 63488 00:14:21.522 }, 00:14:21.522 { 00:14:21.522 "name": "BaseBdev4", 00:14:21.522 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:21.522 "is_configured": true, 00:14:21.522 "data_offset": 2048, 00:14:21.522 "data_size": 63488 00:14:21.522 } 00:14:21.522 ] 00:14:21.522 }' 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.522 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.782 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.782 13:30:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.721 "name": "raid_bdev1", 00:14:22.721 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:22.721 "strip_size_kb": 0, 00:14:22.721 "state": "online", 00:14:22.721 "raid_level": "raid1", 00:14:22.721 "superblock": true, 00:14:22.721 "num_base_bdevs": 4, 00:14:22.721 "num_base_bdevs_discovered": 3, 00:14:22.721 "num_base_bdevs_operational": 3, 00:14:22.721 "process": { 00:14:22.721 "type": "rebuild", 00:14:22.721 "target": "spare", 00:14:22.721 "progress": { 00:14:22.721 "blocks": 51200, 00:14:22.721 "percent": 80 00:14:22.721 } 00:14:22.721 }, 00:14:22.721 "base_bdevs_list": [ 00:14:22.721 { 00:14:22.721 "name": "spare", 00:14:22.721 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:22.721 "is_configured": true, 00:14:22.721 "data_offset": 2048, 00:14:22.721 "data_size": 63488 00:14:22.721 }, 00:14:22.721 { 00:14:22.721 "name": null, 00:14:22.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.721 "is_configured": false, 00:14:22.721 "data_offset": 0, 00:14:22.721 "data_size": 63488 00:14:22.721 }, 00:14:22.721 { 00:14:22.721 "name": "BaseBdev3", 00:14:22.721 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:22.721 "is_configured": true, 00:14:22.721 "data_offset": 2048, 00:14:22.721 "data_size": 63488 00:14:22.721 }, 00:14:22.721 { 00:14:22.721 "name": "BaseBdev4", 00:14:22.721 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:22.721 "is_configured": true, 00:14:22.721 "data_offset": 
2048, 00:14:22.721 "data_size": 63488 00:14:22.721 } 00:14:22.721 ] 00:14:22.721 }' 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.721 13:30:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.290 [2024-11-18 13:30:53.190556] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:23.290 [2024-11-18 13:30:53.190674] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:23.290 [2024-11-18 13:30:53.190843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.859 "name": "raid_bdev1", 00:14:23.859 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:23.859 "strip_size_kb": 0, 00:14:23.859 "state": "online", 00:14:23.859 "raid_level": "raid1", 00:14:23.859 "superblock": true, 00:14:23.859 "num_base_bdevs": 4, 00:14:23.859 "num_base_bdevs_discovered": 3, 00:14:23.859 "num_base_bdevs_operational": 3, 00:14:23.859 "base_bdevs_list": [ 00:14:23.859 { 00:14:23.859 "name": "spare", 00:14:23.859 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:23.859 "is_configured": true, 00:14:23.859 "data_offset": 2048, 00:14:23.859 "data_size": 63488 00:14:23.859 }, 00:14:23.859 { 00:14:23.859 "name": null, 00:14:23.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.859 "is_configured": false, 00:14:23.859 "data_offset": 0, 00:14:23.859 "data_size": 63488 00:14:23.859 }, 00:14:23.859 { 00:14:23.859 "name": "BaseBdev3", 00:14:23.859 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:23.859 "is_configured": true, 00:14:23.859 "data_offset": 2048, 00:14:23.859 "data_size": 63488 00:14:23.859 }, 00:14:23.859 { 00:14:23.859 "name": "BaseBdev4", 00:14:23.859 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:23.859 "is_configured": true, 00:14:23.859 "data_offset": 2048, 00:14:23.859 "data_size": 63488 00:14:23.859 } 00:14:23.859 ] 00:14:23.859 }' 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.859 13:30:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.119 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.119 "name": "raid_bdev1", 00:14:24.119 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:24.119 "strip_size_kb": 0, 00:14:24.119 "state": "online", 00:14:24.119 "raid_level": "raid1", 00:14:24.119 "superblock": true, 00:14:24.119 "num_base_bdevs": 4, 00:14:24.119 "num_base_bdevs_discovered": 3, 00:14:24.119 "num_base_bdevs_operational": 3, 00:14:24.119 "base_bdevs_list": [ 00:14:24.119 { 00:14:24.119 "name": "spare", 00:14:24.119 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:24.119 "is_configured": true, 00:14:24.119 "data_offset": 2048, 
00:14:24.119 "data_size": 63488 00:14:24.119 }, 00:14:24.119 { 00:14:24.119 "name": null, 00:14:24.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.119 "is_configured": false, 00:14:24.119 "data_offset": 0, 00:14:24.119 "data_size": 63488 00:14:24.119 }, 00:14:24.119 { 00:14:24.119 "name": "BaseBdev3", 00:14:24.119 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:24.119 "is_configured": true, 00:14:24.119 "data_offset": 2048, 00:14:24.119 "data_size": 63488 00:14:24.119 }, 00:14:24.119 { 00:14:24.119 "name": "BaseBdev4", 00:14:24.119 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:24.119 "is_configured": true, 00:14:24.119 "data_offset": 2048, 00:14:24.119 "data_size": 63488 00:14:24.119 } 00:14:24.119 ] 00:14:24.119 }' 00:14:24.119 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.119 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.119 13:30:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.119 
13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.119 "name": "raid_bdev1", 00:14:24.119 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:24.119 "strip_size_kb": 0, 00:14:24.119 "state": "online", 00:14:24.119 "raid_level": "raid1", 00:14:24.119 "superblock": true, 00:14:24.119 "num_base_bdevs": 4, 00:14:24.119 "num_base_bdevs_discovered": 3, 00:14:24.119 "num_base_bdevs_operational": 3, 00:14:24.119 "base_bdevs_list": [ 00:14:24.119 { 00:14:24.119 "name": "spare", 00:14:24.119 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:24.119 "is_configured": true, 00:14:24.119 "data_offset": 2048, 00:14:24.119 "data_size": 63488 00:14:24.119 }, 00:14:24.119 { 00:14:24.119 "name": null, 00:14:24.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.119 "is_configured": false, 00:14:24.119 "data_offset": 0, 00:14:24.119 "data_size": 63488 00:14:24.119 }, 00:14:24.119 { 00:14:24.119 "name": "BaseBdev3", 00:14:24.119 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:24.119 "is_configured": true, 00:14:24.119 "data_offset": 2048, 00:14:24.119 "data_size": 63488 
00:14:24.119 }, 00:14:24.119 { 00:14:24.119 "name": "BaseBdev4", 00:14:24.119 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:24.119 "is_configured": true, 00:14:24.119 "data_offset": 2048, 00:14:24.119 "data_size": 63488 00:14:24.119 } 00:14:24.119 ] 00:14:24.119 }' 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.119 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.705 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.705 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.706 [2024-11-18 13:30:54.461280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.706 [2024-11-18 13:30:54.461372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.706 [2024-11-18 13:30:54.461479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.706 [2024-11-18 13:30:54.461573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.706 [2024-11-18 13:30:54.461645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:24.706 
13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:24.706 /dev/nbd0 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:24.706 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.966 1+0 records in 00:14:24.966 1+0 records out 00:14:24.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044234 s, 9.3 MB/s 00:14:24.966 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.966 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:24.966 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.966 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:24.966 13:30:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:24.966 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.966 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.966 13:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:24.966 /dev/nbd1 00:14:24.966 13:30:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:24.966 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.226 1+0 records in 00:14:25.226 1+0 records out 00:14:25.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448375 s, 9.1 MB/s 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:25.226 13:30:55 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.226 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.485 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.746 [2024-11-18 13:30:55.627259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:25.746 [2024-11-18 13:30:55.627350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.746 [2024-11-18 13:30:55.627388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:25.746 [2024-11-18 13:30:55.627416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.746 [2024-11-18 13:30:55.629769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.746 [2024-11-18 13:30:55.629849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:25.746 [2024-11-18 13:30:55.629994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:25.746 [2024-11-18 13:30:55.630084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.746 [2024-11-18 13:30:55.630310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.746 [2024-11-18 13:30:55.630462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.746 spare 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.746 [2024-11-18 13:30:55.730431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:25.746 [2024-11-18 13:30:55.730458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.746 [2024-11-18 13:30:55.730748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:25.746 [2024-11-18 13:30:55.730928] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:25.746 [2024-11-18 13:30:55.730944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:25.746 [2024-11-18 13:30:55.731106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.746 "name": "raid_bdev1", 00:14:25.746 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:25.746 "strip_size_kb": 0, 00:14:25.746 "state": "online", 00:14:25.746 "raid_level": "raid1", 00:14:25.746 "superblock": true, 00:14:25.746 "num_base_bdevs": 4, 00:14:25.746 "num_base_bdevs_discovered": 3, 00:14:25.746 "num_base_bdevs_operational": 3, 00:14:25.746 "base_bdevs_list": [ 00:14:25.746 { 00:14:25.746 "name": "spare", 00:14:25.746 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:25.746 "is_configured": true, 00:14:25.746 "data_offset": 2048, 00:14:25.746 "data_size": 63488 00:14:25.746 }, 00:14:25.746 { 00:14:25.746 "name": null, 00:14:25.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.746 "is_configured": false, 00:14:25.746 "data_offset": 2048, 00:14:25.746 "data_size": 63488 00:14:25.746 }, 00:14:25.746 { 00:14:25.746 "name": "BaseBdev3", 00:14:25.746 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:25.746 "is_configured": true, 00:14:25.746 "data_offset": 2048, 00:14:25.746 "data_size": 63488 00:14:25.746 }, 00:14:25.746 { 00:14:25.746 "name": "BaseBdev4", 00:14:25.746 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:25.746 "is_configured": true, 00:14:25.746 "data_offset": 2048, 00:14:25.746 "data_size": 63488 00:14:25.746 } 00:14:25.746 ] 00:14:25.746 }' 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.746 13:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.316 
13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.316 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.316 "name": "raid_bdev1", 00:14:26.317 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:26.317 "strip_size_kb": 0, 00:14:26.317 "state": "online", 00:14:26.317 "raid_level": "raid1", 00:14:26.317 "superblock": true, 00:14:26.317 "num_base_bdevs": 4, 00:14:26.317 "num_base_bdevs_discovered": 3, 00:14:26.317 "num_base_bdevs_operational": 3, 00:14:26.317 "base_bdevs_list": [ 00:14:26.317 { 00:14:26.317 "name": "spare", 00:14:26.317 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:26.317 "is_configured": true, 00:14:26.317 "data_offset": 2048, 00:14:26.317 "data_size": 63488 00:14:26.317 }, 00:14:26.317 { 00:14:26.317 "name": null, 00:14:26.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.317 "is_configured": false, 00:14:26.317 "data_offset": 2048, 00:14:26.317 "data_size": 63488 00:14:26.317 }, 00:14:26.317 { 00:14:26.317 "name": "BaseBdev3", 00:14:26.317 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:26.317 "is_configured": true, 00:14:26.317 "data_offset": 2048, 00:14:26.317 "data_size": 63488 
00:14:26.317 }, 00:14:26.317 { 00:14:26.317 "name": "BaseBdev4", 00:14:26.317 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:26.317 "is_configured": true, 00:14:26.317 "data_offset": 2048, 00:14:26.317 "data_size": 63488 00:14:26.317 } 00:14:26.317 ] 00:14:26.317 }' 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.317 [2024-11-18 13:30:56.362095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.317 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.576 "name": "raid_bdev1", 00:14:26.576 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:26.576 "strip_size_kb": 0, 00:14:26.576 "state": "online", 00:14:26.576 "raid_level": "raid1", 00:14:26.576 "superblock": true, 00:14:26.576 "num_base_bdevs": 4, 00:14:26.576 "num_base_bdevs_discovered": 2, 00:14:26.576 
"num_base_bdevs_operational": 2, 00:14:26.576 "base_bdevs_list": [ 00:14:26.576 { 00:14:26.576 "name": null, 00:14:26.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.576 "is_configured": false, 00:14:26.576 "data_offset": 0, 00:14:26.576 "data_size": 63488 00:14:26.576 }, 00:14:26.576 { 00:14:26.576 "name": null, 00:14:26.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.576 "is_configured": false, 00:14:26.576 "data_offset": 2048, 00:14:26.576 "data_size": 63488 00:14:26.576 }, 00:14:26.576 { 00:14:26.576 "name": "BaseBdev3", 00:14:26.576 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:26.576 "is_configured": true, 00:14:26.576 "data_offset": 2048, 00:14:26.576 "data_size": 63488 00:14:26.576 }, 00:14:26.576 { 00:14:26.576 "name": "BaseBdev4", 00:14:26.576 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:26.576 "is_configured": true, 00:14:26.576 "data_offset": 2048, 00:14:26.576 "data_size": 63488 00:14:26.576 } 00:14:26.576 ] 00:14:26.576 }' 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.576 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.836 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.836 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.836 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.836 [2024-11-18 13:30:56.781405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.836 [2024-11-18 13:30:56.781584] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:26.836 [2024-11-18 13:30:56.781601] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:26.836 [2024-11-18 13:30:56.781639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.836 [2024-11-18 13:30:56.795174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:26.836 13:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.836 13:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:26.836 [2024-11-18 13:30:56.796901] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.775 13:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.035 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.035 "name": "raid_bdev1", 00:14:28.036 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:28.036 "strip_size_kb": 0, 00:14:28.036 "state": "online", 00:14:28.036 "raid_level": "raid1", 
00:14:28.036 "superblock": true, 00:14:28.036 "num_base_bdevs": 4, 00:14:28.036 "num_base_bdevs_discovered": 3, 00:14:28.036 "num_base_bdevs_operational": 3, 00:14:28.036 "process": { 00:14:28.036 "type": "rebuild", 00:14:28.036 "target": "spare", 00:14:28.036 "progress": { 00:14:28.036 "blocks": 20480, 00:14:28.036 "percent": 32 00:14:28.036 } 00:14:28.036 }, 00:14:28.036 "base_bdevs_list": [ 00:14:28.036 { 00:14:28.036 "name": "spare", 00:14:28.036 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:28.036 "is_configured": true, 00:14:28.036 "data_offset": 2048, 00:14:28.036 "data_size": 63488 00:14:28.036 }, 00:14:28.036 { 00:14:28.036 "name": null, 00:14:28.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.036 "is_configured": false, 00:14:28.036 "data_offset": 2048, 00:14:28.036 "data_size": 63488 00:14:28.036 }, 00:14:28.036 { 00:14:28.036 "name": "BaseBdev3", 00:14:28.036 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:28.036 "is_configured": true, 00:14:28.036 "data_offset": 2048, 00:14:28.036 "data_size": 63488 00:14:28.036 }, 00:14:28.036 { 00:14:28.036 "name": "BaseBdev4", 00:14:28.036 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:28.036 "is_configured": true, 00:14:28.036 "data_offset": 2048, 00:14:28.036 "data_size": 63488 00:14:28.036 } 00:14:28.036 ] 00:14:28.036 }' 00:14:28.036 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.036 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.036 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.036 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.036 13:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:28.036 13:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:28.036 13:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.036 [2024-11-18 13:30:57.944652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.036 [2024-11-18 13:30:58.001576] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.036 [2024-11-18 13:30:58.001629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.036 [2024-11-18 13:30:58.001646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.036 [2024-11-18 13:30:58.001652] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.036 "name": "raid_bdev1", 00:14:28.036 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:28.036 "strip_size_kb": 0, 00:14:28.036 "state": "online", 00:14:28.036 "raid_level": "raid1", 00:14:28.036 "superblock": true, 00:14:28.036 "num_base_bdevs": 4, 00:14:28.036 "num_base_bdevs_discovered": 2, 00:14:28.036 "num_base_bdevs_operational": 2, 00:14:28.036 "base_bdevs_list": [ 00:14:28.036 { 00:14:28.036 "name": null, 00:14:28.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.036 "is_configured": false, 00:14:28.036 "data_offset": 0, 00:14:28.036 "data_size": 63488 00:14:28.036 }, 00:14:28.036 { 00:14:28.036 "name": null, 00:14:28.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.036 "is_configured": false, 00:14:28.036 "data_offset": 2048, 00:14:28.036 "data_size": 63488 00:14:28.036 }, 00:14:28.036 { 00:14:28.036 "name": "BaseBdev3", 00:14:28.036 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:28.036 "is_configured": true, 00:14:28.036 "data_offset": 2048, 00:14:28.036 "data_size": 63488 00:14:28.036 }, 00:14:28.036 { 00:14:28.036 "name": "BaseBdev4", 00:14:28.036 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:28.036 "is_configured": true, 00:14:28.036 "data_offset": 2048, 00:14:28.036 "data_size": 63488 00:14:28.036 } 00:14:28.036 ] 00:14:28.036 }' 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:28.036 13:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.605 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:28.605 13:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.605 13:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.605 [2024-11-18 13:30:58.456934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:28.605 [2024-11-18 13:30:58.457010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.605 [2024-11-18 13:30:58.457036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:28.605 [2024-11-18 13:30:58.457046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.605 [2024-11-18 13:30:58.457496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.605 [2024-11-18 13:30:58.457528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:28.605 [2024-11-18 13:30:58.457617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:28.605 [2024-11-18 13:30:58.457634] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:28.605 [2024-11-18 13:30:58.457649] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:28.605 [2024-11-18 13:30:58.457682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.605 [2024-11-18 13:30:58.470581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:28.605 spare 00:14:28.605 13:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.605 13:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:28.605 [2024-11-18 13:30:58.472352] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.544 "name": "raid_bdev1", 00:14:29.544 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:29.544 "strip_size_kb": 0, 00:14:29.544 "state": "online", 00:14:29.544 
"raid_level": "raid1", 00:14:29.544 "superblock": true, 00:14:29.544 "num_base_bdevs": 4, 00:14:29.544 "num_base_bdevs_discovered": 3, 00:14:29.544 "num_base_bdevs_operational": 3, 00:14:29.544 "process": { 00:14:29.544 "type": "rebuild", 00:14:29.544 "target": "spare", 00:14:29.544 "progress": { 00:14:29.544 "blocks": 20480, 00:14:29.544 "percent": 32 00:14:29.544 } 00:14:29.544 }, 00:14:29.544 "base_bdevs_list": [ 00:14:29.544 { 00:14:29.544 "name": "spare", 00:14:29.544 "uuid": "b774d361-b85c-53f2-be13-d834d460c6ed", 00:14:29.544 "is_configured": true, 00:14:29.544 "data_offset": 2048, 00:14:29.544 "data_size": 63488 00:14:29.544 }, 00:14:29.544 { 00:14:29.544 "name": null, 00:14:29.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.544 "is_configured": false, 00:14:29.544 "data_offset": 2048, 00:14:29.544 "data_size": 63488 00:14:29.544 }, 00:14:29.544 { 00:14:29.544 "name": "BaseBdev3", 00:14:29.544 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:29.544 "is_configured": true, 00:14:29.544 "data_offset": 2048, 00:14:29.544 "data_size": 63488 00:14:29.544 }, 00:14:29.544 { 00:14:29.544 "name": "BaseBdev4", 00:14:29.544 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:29.544 "is_configured": true, 00:14:29.544 "data_offset": 2048, 00:14:29.544 "data_size": 63488 00:14:29.544 } 00:14:29.544 ] 00:14:29.544 }' 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.544 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.804 [2024-11-18 13:30:59.612102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.804 [2024-11-18 13:30:59.676937] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.804 [2024-11-18 13:30:59.676993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.804 [2024-11-18 13:30:59.677007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.804 [2024-11-18 13:30:59.677015] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.804 
13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.804 "name": "raid_bdev1", 00:14:29.804 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:29.804 "strip_size_kb": 0, 00:14:29.804 "state": "online", 00:14:29.804 "raid_level": "raid1", 00:14:29.804 "superblock": true, 00:14:29.804 "num_base_bdevs": 4, 00:14:29.804 "num_base_bdevs_discovered": 2, 00:14:29.804 "num_base_bdevs_operational": 2, 00:14:29.804 "base_bdevs_list": [ 00:14:29.804 { 00:14:29.804 "name": null, 00:14:29.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.804 "is_configured": false, 00:14:29.804 "data_offset": 0, 00:14:29.804 "data_size": 63488 00:14:29.804 }, 00:14:29.804 { 00:14:29.804 "name": null, 00:14:29.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.804 "is_configured": false, 00:14:29.804 "data_offset": 2048, 00:14:29.804 "data_size": 63488 00:14:29.804 }, 00:14:29.804 { 00:14:29.804 "name": "BaseBdev3", 00:14:29.804 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:29.804 "is_configured": true, 00:14:29.804 "data_offset": 2048, 00:14:29.804 "data_size": 63488 00:14:29.804 }, 00:14:29.804 { 00:14:29.804 "name": "BaseBdev4", 00:14:29.804 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:29.804 "is_configured": true, 00:14:29.804 "data_offset": 2048, 00:14:29.804 "data_size": 63488 00:14:29.804 } 00:14:29.804 ] 00:14:29.804 }' 00:14:29.804 13:30:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.804 13:30:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.064 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.324 "name": "raid_bdev1", 00:14:30.324 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:30.324 "strip_size_kb": 0, 00:14:30.324 "state": "online", 00:14:30.324 "raid_level": "raid1", 00:14:30.324 "superblock": true, 00:14:30.324 "num_base_bdevs": 4, 00:14:30.324 "num_base_bdevs_discovered": 2, 00:14:30.324 "num_base_bdevs_operational": 2, 00:14:30.324 "base_bdevs_list": [ 00:14:30.324 { 00:14:30.324 "name": null, 00:14:30.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.324 "is_configured": false, 00:14:30.324 "data_offset": 0, 00:14:30.324 "data_size": 63488 00:14:30.324 }, 00:14:30.324 
{ 00:14:30.324 "name": null, 00:14:30.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.324 "is_configured": false, 00:14:30.324 "data_offset": 2048, 00:14:30.324 "data_size": 63488 00:14:30.324 }, 00:14:30.324 { 00:14:30.324 "name": "BaseBdev3", 00:14:30.324 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:30.324 "is_configured": true, 00:14:30.324 "data_offset": 2048, 00:14:30.324 "data_size": 63488 00:14:30.324 }, 00:14:30.324 { 00:14:30.324 "name": "BaseBdev4", 00:14:30.324 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:30.324 "is_configured": true, 00:14:30.324 "data_offset": 2048, 00:14:30.324 "data_size": 63488 00:14:30.324 } 00:14:30.324 ] 00:14:30.324 }' 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.324 [2024-11-18 13:31:00.248098] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:30.324 [2024-11-18 13:31:00.248166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.324 [2024-11-18 13:31:00.248185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:30.324 [2024-11-18 13:31:00.248196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.324 [2024-11-18 13:31:00.248612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.324 [2024-11-18 13:31:00.248640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.324 [2024-11-18 13:31:00.248715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:30.324 [2024-11-18 13:31:00.248738] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:30.324 [2024-11-18 13:31:00.248749] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:30.324 [2024-11-18 13:31:00.248771] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:30.324 BaseBdev1 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.324 13:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.267 13:31:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.267 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.267 "name": "raid_bdev1", 00:14:31.267 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:31.267 "strip_size_kb": 0, 00:14:31.267 "state": "online", 00:14:31.267 "raid_level": "raid1", 00:14:31.267 "superblock": true, 00:14:31.267 "num_base_bdevs": 4, 00:14:31.267 "num_base_bdevs_discovered": 2, 00:14:31.267 "num_base_bdevs_operational": 2, 00:14:31.267 "base_bdevs_list": [ 00:14:31.267 { 00:14:31.267 "name": null, 00:14:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.267 "is_configured": false, 00:14:31.267 "data_offset": 0, 00:14:31.267 "data_size": 63488 00:14:31.267 }, 00:14:31.267 { 00:14:31.267 "name": null, 00:14:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.267 
"is_configured": false, 00:14:31.267 "data_offset": 2048, 00:14:31.267 "data_size": 63488 00:14:31.267 }, 00:14:31.267 { 00:14:31.267 "name": "BaseBdev3", 00:14:31.267 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:31.267 "is_configured": true, 00:14:31.267 "data_offset": 2048, 00:14:31.267 "data_size": 63488 00:14:31.267 }, 00:14:31.267 { 00:14:31.268 "name": "BaseBdev4", 00:14:31.268 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:31.268 "is_configured": true, 00:14:31.268 "data_offset": 2048, 00:14:31.268 "data_size": 63488 00:14:31.268 } 00:14:31.268 ] 00:14:31.268 }' 00:14:31.268 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.268 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:31.837 "name": "raid_bdev1", 00:14:31.837 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:31.837 "strip_size_kb": 0, 00:14:31.837 "state": "online", 00:14:31.837 "raid_level": "raid1", 00:14:31.837 "superblock": true, 00:14:31.837 "num_base_bdevs": 4, 00:14:31.837 "num_base_bdevs_discovered": 2, 00:14:31.837 "num_base_bdevs_operational": 2, 00:14:31.837 "base_bdevs_list": [ 00:14:31.837 { 00:14:31.837 "name": null, 00:14:31.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.837 "is_configured": false, 00:14:31.837 "data_offset": 0, 00:14:31.837 "data_size": 63488 00:14:31.837 }, 00:14:31.837 { 00:14:31.837 "name": null, 00:14:31.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.837 "is_configured": false, 00:14:31.837 "data_offset": 2048, 00:14:31.837 "data_size": 63488 00:14:31.837 }, 00:14:31.837 { 00:14:31.837 "name": "BaseBdev3", 00:14:31.837 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:31.837 "is_configured": true, 00:14:31.837 "data_offset": 2048, 00:14:31.837 "data_size": 63488 00:14:31.837 }, 00:14:31.837 { 00:14:31.837 "name": "BaseBdev4", 00:14:31.837 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:31.837 "is_configured": true, 00:14:31.837 "data_offset": 2048, 00:14:31.837 "data_size": 63488 00:14:31.837 } 00:14:31.837 ] 00:14:31.837 }' 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.837 [2024-11-18 13:31:01.833400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.837 [2024-11-18 13:31:01.833593] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:31.837 [2024-11-18 13:31:01.833607] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:31.837 request: 00:14:31.837 { 00:14:31.837 "base_bdev": "BaseBdev1", 00:14:31.837 "raid_bdev": "raid_bdev1", 00:14:31.837 "method": "bdev_raid_add_base_bdev", 00:14:31.837 "req_id": 1 00:14:31.837 } 00:14:31.837 Got JSON-RPC error response 00:14:31.837 response: 00:14:31.837 { 00:14:31.837 "code": -22, 00:14:31.837 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:31.837 } 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.837 13:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.219 "name": "raid_bdev1", 00:14:33.219 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:33.219 "strip_size_kb": 0, 00:14:33.219 "state": "online", 00:14:33.219 "raid_level": "raid1", 00:14:33.219 "superblock": true, 00:14:33.219 "num_base_bdevs": 4, 00:14:33.219 "num_base_bdevs_discovered": 2, 00:14:33.219 "num_base_bdevs_operational": 2, 00:14:33.219 "base_bdevs_list": [ 00:14:33.219 { 00:14:33.219 "name": null, 00:14:33.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.219 "is_configured": false, 00:14:33.219 "data_offset": 0, 00:14:33.219 "data_size": 63488 00:14:33.219 }, 00:14:33.219 { 00:14:33.219 "name": null, 00:14:33.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.219 "is_configured": false, 00:14:33.219 "data_offset": 2048, 00:14:33.219 "data_size": 63488 00:14:33.219 }, 00:14:33.219 { 00:14:33.219 "name": "BaseBdev3", 00:14:33.219 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:33.219 "is_configured": true, 00:14:33.219 "data_offset": 2048, 00:14:33.219 "data_size": 63488 00:14:33.219 }, 00:14:33.219 { 00:14:33.219 "name": "BaseBdev4", 00:14:33.219 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:33.219 "is_configured": true, 00:14:33.219 "data_offset": 2048, 00:14:33.219 "data_size": 63488 00:14:33.219 } 00:14:33.219 ] 00:14:33.219 }' 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.219 13:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.479 13:31:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.479 "name": "raid_bdev1", 00:14:33.479 "uuid": "4f5270c2-1fcd-43e2-a115-de038f04f7b8", 00:14:33.479 "strip_size_kb": 0, 00:14:33.479 "state": "online", 00:14:33.479 "raid_level": "raid1", 00:14:33.479 "superblock": true, 00:14:33.479 "num_base_bdevs": 4, 00:14:33.479 "num_base_bdevs_discovered": 2, 00:14:33.479 "num_base_bdevs_operational": 2, 00:14:33.479 "base_bdevs_list": [ 00:14:33.479 { 00:14:33.479 "name": null, 00:14:33.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.479 "is_configured": false, 00:14:33.479 "data_offset": 0, 00:14:33.479 "data_size": 63488 00:14:33.479 }, 00:14:33.479 { 00:14:33.479 "name": null, 00:14:33.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.479 "is_configured": false, 00:14:33.479 "data_offset": 2048, 00:14:33.479 "data_size": 63488 00:14:33.479 }, 00:14:33.479 { 00:14:33.479 "name": "BaseBdev3", 00:14:33.479 "uuid": "34242fc3-b38a-5c85-970e-4fcff90dee95", 00:14:33.479 "is_configured": true, 00:14:33.479 "data_offset": 2048, 00:14:33.479 "data_size": 63488 00:14:33.479 }, 
00:14:33.479 { 00:14:33.479 "name": "BaseBdev4", 00:14:33.479 "uuid": "ba542553-1c97-5e1f-b13f-f87ef9d94d2b", 00:14:33.479 "is_configured": true, 00:14:33.479 "data_offset": 2048, 00:14:33.479 "data_size": 63488 00:14:33.479 } 00:14:33.479 ] 00:14:33.479 }' 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77957 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77957 ']' 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77957 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77957 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.479 killing process with pid 77957 00:14:33.479 Received shutdown signal, test time was about 60.000000 seconds 00:14:33.479 00:14:33.479 Latency(us) 00:14:33.479 [2024-11-18T13:31:03.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.479 [2024-11-18T13:31:03.533Z] 
=================================================================================================================== 00:14:33.479 [2024-11-18T13:31:03.533Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77957' 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77957 00:14:33.479 [2024-11-18 13:31:03.460489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.479 [2024-11-18 13:31:03.460627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.479 13:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77957 00:14:33.479 [2024-11-18 13:31:03.460700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.479 [2024-11-18 13:31:03.460711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:34.049 [2024-11-18 13:31:03.918347] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.987 13:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:34.987 00:14:34.987 real 0m24.456s 00:14:34.987 user 0m29.546s 00:14:34.987 sys 0m3.650s 00:14:34.987 13:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.987 ************************************ 00:14:34.987 END TEST raid_rebuild_test_sb 00:14:34.987 ************************************ 00:14:34.987 13:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.987 13:31:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:34.987 13:31:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:34.987 13:31:05 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.987 13:31:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.246 ************************************ 00:14:35.246 START TEST raid_rebuild_test_io 00:14:35.246 ************************************ 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:35.246 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78705 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78705 00:14:35.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78705 ']' 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.247 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.247 [2024-11-18 13:31:05.141451] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:35.247 [2024-11-18 13:31:05.141647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:35.247 Zero copy mechanism will not be used. 
00:14:35.247 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78705 ] 00:14:35.247 [2024-11-18 13:31:05.293056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.506 [2024-11-18 13:31:05.404863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.764 [2024-11-18 13:31:05.604862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.764 [2024-11-18 13:31:05.604970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.024 BaseBdev1_malloc 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.024 13:31:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.024 [2024-11-18 13:31:05.999158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.024 [2024-11-18 13:31:05.999315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:36.024 [2024-11-18 13:31:05.999359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.024 [2024-11-18 13:31:05.999393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.024 [2024-11-18 13:31:06.001389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.024 [2024-11-18 13:31:06.001467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.024 BaseBdev1 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.024 BaseBdev2_malloc 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.024 [2024-11-18 13:31:06.052727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:36.024 [2024-11-18 13:31:06.052867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.024 [2024-11-18 13:31:06.052903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.024 [2024-11-18 13:31:06.052933] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.024 [2024-11-18 13:31:06.054885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.024 [2024-11-18 13:31:06.054963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.024 BaseBdev2 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.024 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.283 BaseBdev3_malloc 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.283 [2024-11-18 13:31:06.120150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:36.283 [2024-11-18 13:31:06.120274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.283 [2024-11-18 13:31:06.120312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:36.283 [2024-11-18 13:31:06.120342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.283 [2024-11-18 13:31:06.122257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:36.283 [2024-11-18 13:31:06.122293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:36.283 BaseBdev3 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.283 BaseBdev4_malloc 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.283 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.283 [2024-11-18 13:31:06.173897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:36.283 [2024-11-18 13:31:06.174019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.283 [2024-11-18 13:31:06.174070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:36.283 [2024-11-18 13:31:06.174099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.284 [2024-11-18 13:31:06.176047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.284 [2024-11-18 13:31:06.176124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:36.284 BaseBdev4 00:14:36.284 13:31:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.284 spare_malloc 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.284 spare_delay 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.284 [2024-11-18 13:31:06.240252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.284 [2024-11-18 13:31:06.240315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.284 [2024-11-18 13:31:06.240334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:36.284 [2024-11-18 13:31:06.240344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.284 [2024-11-18 13:31:06.242294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:36.284 [2024-11-18 13:31:06.242389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.284 spare 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.284 [2024-11-18 13:31:06.252289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.284 [2024-11-18 13:31:06.253969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.284 [2024-11-18 13:31:06.254072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.284 [2024-11-18 13:31:06.254152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:36.284 [2024-11-18 13:31:06.254253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:36.284 [2024-11-18 13:31:06.254294] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:36.284 [2024-11-18 13:31:06.254537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:36.284 [2024-11-18 13:31:06.254765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:36.284 [2024-11-18 13:31:06.254812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:36.284 [2024-11-18 13:31:06.254992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.284 "name": "raid_bdev1", 00:14:36.284 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:36.284 "strip_size_kb": 0, 00:14:36.284 "state": "online", 00:14:36.284 "raid_level": "raid1", 00:14:36.284 "superblock": 
false, 00:14:36.284 "num_base_bdevs": 4, 00:14:36.284 "num_base_bdevs_discovered": 4, 00:14:36.284 "num_base_bdevs_operational": 4, 00:14:36.284 "base_bdevs_list": [ 00:14:36.284 { 00:14:36.284 "name": "BaseBdev1", 00:14:36.284 "uuid": "a7407445-0256-514d-8a86-c6e929ddea63", 00:14:36.284 "is_configured": true, 00:14:36.284 "data_offset": 0, 00:14:36.284 "data_size": 65536 00:14:36.284 }, 00:14:36.284 { 00:14:36.284 "name": "BaseBdev2", 00:14:36.284 "uuid": "bcdbc697-e455-520b-9dab-992cfb16fa09", 00:14:36.284 "is_configured": true, 00:14:36.284 "data_offset": 0, 00:14:36.284 "data_size": 65536 00:14:36.284 }, 00:14:36.284 { 00:14:36.284 "name": "BaseBdev3", 00:14:36.284 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:36.284 "is_configured": true, 00:14:36.284 "data_offset": 0, 00:14:36.284 "data_size": 65536 00:14:36.284 }, 00:14:36.284 { 00:14:36.284 "name": "BaseBdev4", 00:14:36.284 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:36.284 "is_configured": true, 00:14:36.284 "data_offset": 0, 00:14:36.284 "data_size": 65536 00:14:36.284 } 00:14:36.284 ] 00:14:36.284 }' 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.284 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.853 [2024-11-18 13:31:06.687891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.853 [2024-11-18 13:31:06.779353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.853 13:31:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.853 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.853 "name": "raid_bdev1", 00:14:36.853 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:36.853 "strip_size_kb": 0, 00:14:36.853 "state": "online", 00:14:36.853 "raid_level": "raid1", 00:14:36.853 "superblock": false, 00:14:36.853 "num_base_bdevs": 4, 00:14:36.853 "num_base_bdevs_discovered": 3, 00:14:36.853 "num_base_bdevs_operational": 3, 00:14:36.853 "base_bdevs_list": [ 00:14:36.853 { 00:14:36.853 "name": null, 00:14:36.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.853 "is_configured": false, 00:14:36.853 "data_offset": 0, 00:14:36.853 "data_size": 65536 00:14:36.853 }, 00:14:36.853 { 00:14:36.853 "name": "BaseBdev2", 00:14:36.853 "uuid": "bcdbc697-e455-520b-9dab-992cfb16fa09", 00:14:36.853 
"is_configured": true, 00:14:36.853 "data_offset": 0, 00:14:36.853 "data_size": 65536 00:14:36.853 }, 00:14:36.853 { 00:14:36.853 "name": "BaseBdev3", 00:14:36.853 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:36.853 "is_configured": true, 00:14:36.853 "data_offset": 0, 00:14:36.853 "data_size": 65536 00:14:36.853 }, 00:14:36.853 { 00:14:36.853 "name": "BaseBdev4", 00:14:36.853 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:36.853 "is_configured": true, 00:14:36.854 "data_offset": 0, 00:14:36.854 "data_size": 65536 00:14:36.854 } 00:14:36.854 ] 00:14:36.854 }' 00:14:36.854 13:31:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.854 13:31:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.854 [2024-11-18 13:31:06.874808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:36.854 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:36.854 Zero copy mechanism will not be used. 00:14:36.854 Running I/O for 60 seconds... 
00:14:37.477 13:31:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:37.477 13:31:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.477 13:31:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.477 [2024-11-18 13:31:07.213944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.477 13:31:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.477 13:31:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:37.477 [2024-11-18 13:31:07.280145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:37.477 [2024-11-18 13:31:07.282015] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:37.477 [2024-11-18 13:31:07.404327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:37.477 [2024-11-18 13:31:07.404881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:37.477 [2024-11-18 13:31:07.521004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:37.477 [2024-11-18 13:31:07.521333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.048 196.00 IOPS, 588.00 MiB/s [2024-11-18T13:31:08.102Z] [2024-11-18 13:31:07.893867] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:38.048 [2024-11-18 13:31:07.894627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:38.308 [2024-11-18 13:31:08.225128] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:38.308 [2024-11-18 13:31:08.226437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.308 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.308 "name": "raid_bdev1", 00:14:38.308 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:38.308 "strip_size_kb": 0, 00:14:38.308 "state": "online", 00:14:38.308 "raid_level": "raid1", 00:14:38.308 "superblock": false, 00:14:38.308 "num_base_bdevs": 4, 00:14:38.308 "num_base_bdevs_discovered": 4, 00:14:38.308 "num_base_bdevs_operational": 4, 00:14:38.308 "process": { 00:14:38.308 "type": "rebuild", 00:14:38.308 "target": "spare", 00:14:38.308 "progress": { 00:14:38.308 "blocks": 14336, 
00:14:38.308 "percent": 21 00:14:38.308 } 00:14:38.308 }, 00:14:38.308 "base_bdevs_list": [ 00:14:38.308 { 00:14:38.308 "name": "spare", 00:14:38.308 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:38.308 "is_configured": true, 00:14:38.308 "data_offset": 0, 00:14:38.308 "data_size": 65536 00:14:38.308 }, 00:14:38.308 { 00:14:38.308 "name": "BaseBdev2", 00:14:38.308 "uuid": "bcdbc697-e455-520b-9dab-992cfb16fa09", 00:14:38.308 "is_configured": true, 00:14:38.308 "data_offset": 0, 00:14:38.308 "data_size": 65536 00:14:38.308 }, 00:14:38.308 { 00:14:38.308 "name": "BaseBdev3", 00:14:38.308 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:38.308 "is_configured": true, 00:14:38.308 "data_offset": 0, 00:14:38.309 "data_size": 65536 00:14:38.309 }, 00:14:38.309 { 00:14:38.309 "name": "BaseBdev4", 00:14:38.309 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:38.309 "is_configured": true, 00:14:38.309 "data_offset": 0, 00:14:38.309 "data_size": 65536 00:14:38.309 } 00:14:38.309 ] 00:14:38.309 }' 00:14:38.309 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.309 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.309 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.568 [2024-11-18 13:31:08.398626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.568 [2024-11-18 13:31:08.448008] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:38.568 [2024-11-18 13:31:08.448772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:38.568 [2024-11-18 13:31:08.556454] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:38.568 [2024-11-18 13:31:08.560260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.568 [2024-11-18 13:31:08.560304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.568 [2024-11-18 13:31:08.560319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.568 [2024-11-18 13:31:08.600788] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:38.568 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.828 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.828 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.828 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.828 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.828 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.828 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.828 "name": "raid_bdev1", 00:14:38.828 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:38.828 "strip_size_kb": 0, 00:14:38.828 "state": "online", 00:14:38.828 "raid_level": "raid1", 00:14:38.828 "superblock": false, 00:14:38.828 "num_base_bdevs": 4, 00:14:38.828 "num_base_bdevs_discovered": 3, 00:14:38.828 "num_base_bdevs_operational": 3, 00:14:38.828 "base_bdevs_list": [ 00:14:38.828 { 00:14:38.828 "name": null, 00:14:38.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.828 "is_configured": false, 00:14:38.828 "data_offset": 0, 00:14:38.828 "data_size": 65536 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "name": "BaseBdev2", 00:14:38.828 "uuid": "bcdbc697-e455-520b-9dab-992cfb16fa09", 00:14:38.828 "is_configured": true, 00:14:38.828 "data_offset": 0, 00:14:38.828 "data_size": 65536 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "name": "BaseBdev3", 00:14:38.828 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:38.828 "is_configured": true, 00:14:38.828 "data_offset": 0, 00:14:38.828 "data_size": 65536 00:14:38.828 }, 00:14:38.828 { 00:14:38.828 "name": "BaseBdev4", 00:14:38.828 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:38.828 "is_configured": true, 00:14:38.828 
"data_offset": 0, 00:14:38.828 "data_size": 65536 00:14:38.828 } 00:14:38.828 ] 00:14:38.828 }' 00:14:38.828 13:31:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.828 13:31:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.087 166.00 IOPS, 498.00 MiB/s [2024-11-18T13:31:09.141Z] 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.087 "name": "raid_bdev1", 00:14:39.087 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:39.087 "strip_size_kb": 0, 00:14:39.087 "state": "online", 00:14:39.087 "raid_level": "raid1", 00:14:39.087 "superblock": false, 00:14:39.087 "num_base_bdevs": 4, 00:14:39.087 "num_base_bdevs_discovered": 3, 00:14:39.087 "num_base_bdevs_operational": 3, 00:14:39.087 "base_bdevs_list": [ 00:14:39.087 { 00:14:39.087 "name": null, 00:14:39.087 
"uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.087 "is_configured": false, 00:14:39.087 "data_offset": 0, 00:14:39.087 "data_size": 65536 00:14:39.087 }, 00:14:39.087 { 00:14:39.087 "name": "BaseBdev2", 00:14:39.087 "uuid": "bcdbc697-e455-520b-9dab-992cfb16fa09", 00:14:39.087 "is_configured": true, 00:14:39.087 "data_offset": 0, 00:14:39.087 "data_size": 65536 00:14:39.087 }, 00:14:39.087 { 00:14:39.087 "name": "BaseBdev3", 00:14:39.087 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:39.087 "is_configured": true, 00:14:39.087 "data_offset": 0, 00:14:39.087 "data_size": 65536 00:14:39.087 }, 00:14:39.087 { 00:14:39.087 "name": "BaseBdev4", 00:14:39.087 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:39.087 "is_configured": true, 00:14:39.087 "data_offset": 0, 00:14:39.087 "data_size": 65536 00:14:39.087 } 00:14:39.087 ] 00:14:39.087 }' 00:14:39.087 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.346 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.346 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.346 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.346 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:39.346 13:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.346 13:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.346 [2024-11-18 13:31:09.225965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.346 13:31:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.346 13:31:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:39.346 [2024-11-18 
13:31:09.298490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:39.346 [2024-11-18 13:31:09.300432] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.605 [2024-11-18 13:31:09.415627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:39.605 [2024-11-18 13:31:09.416015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:39.605 [2024-11-18 13:31:09.623826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:39.605 [2024-11-18 13:31:09.624056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:40.126 161.00 IOPS, 483.00 MiB/s [2024-11-18T13:31:10.180Z] [2024-11-18 13:31:09.946295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:40.126 [2024-11-18 13:31:09.946805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:40.126 [2024-11-18 13:31:10.081426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:40.126 [2024-11-18 13:31:10.082187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.386 "name": "raid_bdev1", 00:14:40.386 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:40.386 "strip_size_kb": 0, 00:14:40.386 "state": "online", 00:14:40.386 "raid_level": "raid1", 00:14:40.386 "superblock": false, 00:14:40.386 "num_base_bdevs": 4, 00:14:40.386 "num_base_bdevs_discovered": 4, 00:14:40.386 "num_base_bdevs_operational": 4, 00:14:40.386 "process": { 00:14:40.386 "type": "rebuild", 00:14:40.386 "target": "spare", 00:14:40.386 "progress": { 00:14:40.386 "blocks": 10240, 00:14:40.386 "percent": 15 00:14:40.386 } 00:14:40.386 }, 00:14:40.386 "base_bdevs_list": [ 00:14:40.386 { 00:14:40.386 "name": "spare", 00:14:40.386 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:40.386 "is_configured": true, 00:14:40.386 "data_offset": 0, 00:14:40.386 "data_size": 65536 00:14:40.386 }, 00:14:40.386 { 00:14:40.386 "name": "BaseBdev2", 00:14:40.386 "uuid": "bcdbc697-e455-520b-9dab-992cfb16fa09", 00:14:40.386 "is_configured": true, 00:14:40.386 "data_offset": 0, 00:14:40.386 "data_size": 65536 00:14:40.386 }, 00:14:40.386 { 00:14:40.386 "name": "BaseBdev3", 00:14:40.386 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:40.386 "is_configured": true, 00:14:40.386 
"data_offset": 0, 00:14:40.386 "data_size": 65536 00:14:40.386 }, 00:14:40.386 { 00:14:40.386 "name": "BaseBdev4", 00:14:40.386 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:40.386 "is_configured": true, 00:14:40.386 "data_offset": 0, 00:14:40.386 "data_size": 65536 00:14:40.386 } 00:14:40.386 ] 00:14:40.386 }' 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.386 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.646 [2024-11-18 13:31:10.437984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.646 [2024-11-18 13:31:10.540226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:40.646 [2024-11-18 13:31:10.546232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:40.646 [2024-11-18 
13:31:10.654579] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:40.646 [2024-11-18 13:31:10.654617] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.646 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.905 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.905 "name": "raid_bdev1", 00:14:40.905 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:40.905 "strip_size_kb": 0, 00:14:40.905 "state": "online", 00:14:40.905 "raid_level": "raid1", 00:14:40.905 
"superblock": false, 00:14:40.905 "num_base_bdevs": 4, 00:14:40.905 "num_base_bdevs_discovered": 3, 00:14:40.905 "num_base_bdevs_operational": 3, 00:14:40.905 "process": { 00:14:40.905 "type": "rebuild", 00:14:40.905 "target": "spare", 00:14:40.905 "progress": { 00:14:40.905 "blocks": 16384, 00:14:40.905 "percent": 25 00:14:40.905 } 00:14:40.905 }, 00:14:40.905 "base_bdevs_list": [ 00:14:40.905 { 00:14:40.905 "name": "spare", 00:14:40.905 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:40.905 "is_configured": true, 00:14:40.905 "data_offset": 0, 00:14:40.905 "data_size": 65536 00:14:40.905 }, 00:14:40.905 { 00:14:40.905 "name": null, 00:14:40.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.905 "is_configured": false, 00:14:40.905 "data_offset": 0, 00:14:40.905 "data_size": 65536 00:14:40.905 }, 00:14:40.905 { 00:14:40.905 "name": "BaseBdev3", 00:14:40.905 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:40.905 "is_configured": true, 00:14:40.905 "data_offset": 0, 00:14:40.905 "data_size": 65536 00:14:40.905 }, 00:14:40.905 { 00:14:40.905 "name": "BaseBdev4", 00:14:40.905 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:40.905 "is_configured": true, 00:14:40.905 "data_offset": 0, 00:14:40.905 "data_size": 65536 00:14:40.905 } 00:14:40.906 ] 00:14:40.906 }' 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=484 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.906 13:31:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.906 "name": "raid_bdev1", 00:14:40.906 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:40.906 "strip_size_kb": 0, 00:14:40.906 "state": "online", 00:14:40.906 "raid_level": "raid1", 00:14:40.906 "superblock": false, 00:14:40.906 "num_base_bdevs": 4, 00:14:40.906 "num_base_bdevs_discovered": 3, 00:14:40.906 "num_base_bdevs_operational": 3, 00:14:40.906 "process": { 00:14:40.906 "type": "rebuild", 00:14:40.906 "target": "spare", 00:14:40.906 "progress": { 00:14:40.906 "blocks": 18432, 00:14:40.906 "percent": 28 00:14:40.906 } 00:14:40.906 }, 00:14:40.906 "base_bdevs_list": [ 00:14:40.906 { 00:14:40.906 "name": "spare", 00:14:40.906 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:40.906 "is_configured": true, 00:14:40.906 "data_offset": 0, 00:14:40.906 "data_size": 65536 
00:14:40.906 }, 00:14:40.906 { 00:14:40.906 "name": null, 00:14:40.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.906 "is_configured": false, 00:14:40.906 "data_offset": 0, 00:14:40.906 "data_size": 65536 00:14:40.906 }, 00:14:40.906 { 00:14:40.906 "name": "BaseBdev3", 00:14:40.906 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:40.906 "is_configured": true, 00:14:40.906 "data_offset": 0, 00:14:40.906 "data_size": 65536 00:14:40.906 }, 00:14:40.906 { 00:14:40.906 "name": "BaseBdev4", 00:14:40.906 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:40.906 "is_configured": true, 00:14:40.906 "data_offset": 0, 00:14:40.906 "data_size": 65536 00:14:40.906 } 00:14:40.906 ] 00:14:40.906 }' 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.906 134.25 IOPS, 402.75 MiB/s [2024-11-18T13:31:10.960Z] 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.906 [2024-11-18 13:31:10.894417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.906 13:31:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.165 [2024-11-18 13:31:11.015240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:41.734 [2024-11-18 13:31:11.661231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:41.734 [2024-11-18 13:31:11.662078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:41.994 [2024-11-18 
13:31:11.868988] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:41.994 118.00 IOPS, 354.00 MiB/s [2024-11-18T13:31:12.048Z] 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.994 "name": "raid_bdev1", 00:14:41.994 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:41.994 "strip_size_kb": 0, 00:14:41.994 "state": "online", 00:14:41.994 "raid_level": "raid1", 00:14:41.994 "superblock": false, 00:14:41.994 "num_base_bdevs": 4, 00:14:41.994 "num_base_bdevs_discovered": 3, 00:14:41.994 "num_base_bdevs_operational": 3, 00:14:41.994 "process": { 00:14:41.994 "type": "rebuild", 00:14:41.994 "target": "spare", 00:14:41.994 "progress": { 00:14:41.994 
"blocks": 34816, 00:14:41.994 "percent": 53 00:14:41.994 } 00:14:41.994 }, 00:14:41.994 "base_bdevs_list": [ 00:14:41.994 { 00:14:41.994 "name": "spare", 00:14:41.994 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:41.994 "is_configured": true, 00:14:41.994 "data_offset": 0, 00:14:41.994 "data_size": 65536 00:14:41.994 }, 00:14:41.994 { 00:14:41.994 "name": null, 00:14:41.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.994 "is_configured": false, 00:14:41.994 "data_offset": 0, 00:14:41.994 "data_size": 65536 00:14:41.994 }, 00:14:41.994 { 00:14:41.994 "name": "BaseBdev3", 00:14:41.994 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:41.994 "is_configured": true, 00:14:41.994 "data_offset": 0, 00:14:41.994 "data_size": 65536 00:14:41.994 }, 00:14:41.994 { 00:14:41.994 "name": "BaseBdev4", 00:14:41.994 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:41.994 "is_configured": true, 00:14:41.994 "data_offset": 0, 00:14:41.994 "data_size": 65536 00:14:41.994 } 00:14:41.994 ] 00:14:41.994 }' 00:14:41.994 13:31:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.994 13:31:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.254 13:31:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.254 13:31:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.254 13:31:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.254 [2024-11-18 13:31:12.192748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:42.513 [2024-11-18 13:31:12.408794] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:42.513 [2024-11-18 13:31:12.409111] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:43.082 107.67 IOPS, 323.00 MiB/s [2024-11-18T13:31:13.136Z] [2024-11-18 13:31:13.087120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.082 13:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.341 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.341 "name": "raid_bdev1", 00:14:43.341 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:43.341 "strip_size_kb": 0, 00:14:43.341 "state": "online", 00:14:43.341 "raid_level": "raid1", 00:14:43.341 "superblock": false, 00:14:43.341 "num_base_bdevs": 4, 00:14:43.341 "num_base_bdevs_discovered": 3, 00:14:43.341 "num_base_bdevs_operational": 3, 00:14:43.341 "process": { 
00:14:43.341 "type": "rebuild", 00:14:43.342 "target": "spare", 00:14:43.342 "progress": { 00:14:43.342 "blocks": 53248, 00:14:43.342 "percent": 81 00:14:43.342 } 00:14:43.342 }, 00:14:43.342 "base_bdevs_list": [ 00:14:43.342 { 00:14:43.342 "name": "spare", 00:14:43.342 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:43.342 "is_configured": true, 00:14:43.342 "data_offset": 0, 00:14:43.342 "data_size": 65536 00:14:43.342 }, 00:14:43.342 { 00:14:43.342 "name": null, 00:14:43.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.342 "is_configured": false, 00:14:43.342 "data_offset": 0, 00:14:43.342 "data_size": 65536 00:14:43.342 }, 00:14:43.342 { 00:14:43.342 "name": "BaseBdev3", 00:14:43.342 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:43.342 "is_configured": true, 00:14:43.342 "data_offset": 0, 00:14:43.342 "data_size": 65536 00:14:43.342 }, 00:14:43.342 { 00:14:43.342 "name": "BaseBdev4", 00:14:43.342 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:43.342 "is_configured": true, 00:14:43.342 "data_offset": 0, 00:14:43.342 "data_size": 65536 00:14:43.342 } 00:14:43.342 ] 00:14:43.342 }' 00:14:43.342 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.342 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.342 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.342 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.342 13:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.910 [2024-11-18 13:31:13.848300] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:43.910 96.00 IOPS, 288.00 MiB/s [2024-11-18T13:31:13.964Z] [2024-11-18 13:31:13.953489] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev 
raid_bdev1 00:14:43.910 [2024-11-18 13:31:13.957339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.479 "name": "raid_bdev1", 00:14:44.479 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:44.479 "strip_size_kb": 0, 00:14:44.479 "state": "online", 00:14:44.479 "raid_level": "raid1", 00:14:44.479 "superblock": false, 00:14:44.479 "num_base_bdevs": 4, 00:14:44.479 "num_base_bdevs_discovered": 3, 00:14:44.479 "num_base_bdevs_operational": 3, 00:14:44.479 "base_bdevs_list": [ 00:14:44.479 { 00:14:44.479 "name": "spare", 00:14:44.479 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:44.479 "is_configured": true, 00:14:44.479 
"data_offset": 0, 00:14:44.479 "data_size": 65536 00:14:44.479 }, 00:14:44.479 { 00:14:44.479 "name": null, 00:14:44.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.479 "is_configured": false, 00:14:44.479 "data_offset": 0, 00:14:44.479 "data_size": 65536 00:14:44.479 }, 00:14:44.479 { 00:14:44.479 "name": "BaseBdev3", 00:14:44.479 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:44.479 "is_configured": true, 00:14:44.479 "data_offset": 0, 00:14:44.479 "data_size": 65536 00:14:44.479 }, 00:14:44.479 { 00:14:44.479 "name": "BaseBdev4", 00:14:44.479 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:44.479 "is_configured": true, 00:14:44.479 "data_offset": 0, 00:14:44.479 "data_size": 65536 00:14:44.479 } 00:14:44.479 ] 00:14:44.479 }' 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.479 "name": "raid_bdev1", 00:14:44.479 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:44.479 "strip_size_kb": 0, 00:14:44.479 "state": "online", 00:14:44.479 "raid_level": "raid1", 00:14:44.479 "superblock": false, 00:14:44.479 "num_base_bdevs": 4, 00:14:44.479 "num_base_bdevs_discovered": 3, 00:14:44.479 "num_base_bdevs_operational": 3, 00:14:44.479 "base_bdevs_list": [ 00:14:44.479 { 00:14:44.479 "name": "spare", 00:14:44.479 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:44.479 "is_configured": true, 00:14:44.479 "data_offset": 0, 00:14:44.479 "data_size": 65536 00:14:44.479 }, 00:14:44.479 { 00:14:44.479 "name": null, 00:14:44.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.479 "is_configured": false, 00:14:44.479 "data_offset": 0, 00:14:44.479 "data_size": 65536 00:14:44.479 }, 00:14:44.479 { 00:14:44.479 "name": "BaseBdev3", 00:14:44.479 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:44.479 "is_configured": true, 00:14:44.479 "data_offset": 0, 00:14:44.479 "data_size": 65536 00:14:44.479 }, 00:14:44.479 { 00:14:44.479 "name": "BaseBdev4", 00:14:44.479 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:44.479 "is_configured": true, 00:14:44.479 "data_offset": 0, 00:14:44.479 "data_size": 65536 00:14:44.479 } 00:14:44.479 ] 00:14:44.479 }' 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.479 13:31:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.479 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.742 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.742 13:31:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.742 "name": "raid_bdev1", 00:14:44.742 "uuid": "eff8314c-841e-46eb-ab3a-1fa7b80aa82e", 00:14:44.742 "strip_size_kb": 0, 00:14:44.742 "state": "online", 00:14:44.742 "raid_level": "raid1", 00:14:44.742 "superblock": false, 00:14:44.742 "num_base_bdevs": 4, 00:14:44.742 "num_base_bdevs_discovered": 3, 00:14:44.742 "num_base_bdevs_operational": 3, 00:14:44.742 "base_bdevs_list": [ 00:14:44.742 { 00:14:44.742 "name": "spare", 00:14:44.742 "uuid": "99b06115-dc43-5872-9aba-627cf35c6345", 00:14:44.742 "is_configured": true, 00:14:44.742 "data_offset": 0, 00:14:44.742 "data_size": 65536 00:14:44.742 }, 00:14:44.742 { 00:14:44.742 "name": null, 00:14:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.742 "is_configured": false, 00:14:44.742 "data_offset": 0, 00:14:44.742 "data_size": 65536 00:14:44.742 }, 00:14:44.742 { 00:14:44.742 "name": "BaseBdev3", 00:14:44.742 "uuid": "e4f864cc-16c9-57b0-b182-ce5069698e98", 00:14:44.742 "is_configured": true, 00:14:44.742 "data_offset": 0, 00:14:44.742 "data_size": 65536 00:14:44.742 }, 00:14:44.742 { 00:14:44.743 "name": "BaseBdev4", 00:14:44.743 "uuid": "e82317fd-7b0c-5a30-bdbe-fc996a78b30b", 00:14:44.743 "is_configured": true, 00:14:44.743 "data_offset": 0, 00:14:44.743 "data_size": 65536 00:14:44.743 } 00:14:44.743 ] 00:14:44.743 }' 00:14:44.743 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.743 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.003 87.50 IOPS, 262.50 MiB/s [2024-11-18T13:31:15.057Z] 13:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:45.003 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.003 13:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.003 [2024-11-18 13:31:14.985523] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.003 [2024-11-18 13:31:14.985563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.003 00:14:45.003 Latency(us) 00:14:45.003 [2024-11-18T13:31:15.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.004 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:45.004 raid_bdev1 : 8.17 86.88 260.63 0.00 0.00 15591.42 309.44 111268.11 00:14:45.004 [2024-11-18T13:31:15.058Z] =================================================================================================================== 00:14:45.004 [2024-11-18T13:31:15.058Z] Total : 86.88 260.63 0.00 0.00 15591.42 309.44 111268.11 00:14:45.004 [2024-11-18 13:31:15.053951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.004 [2024-11-18 13:31:15.053998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.004 [2024-11-18 13:31:15.054099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.004 [2024-11-18 13:31:15.054110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:45.004 { 00:14:45.004 "results": [ 00:14:45.004 { 00:14:45.004 "job": "raid_bdev1", 00:14:45.004 "core_mask": "0x1", 00:14:45.004 "workload": "randrw", 00:14:45.004 "percentage": 50, 00:14:45.004 "status": "finished", 00:14:45.004 "queue_depth": 2, 00:14:45.004 "io_size": 3145728, 00:14:45.004 "runtime": 8.172374, 00:14:45.004 "iops": 86.87806015730557, 00:14:45.004 "mibps": 260.6341804719167, 00:14:45.004 "io_failed": 0, 00:14:45.004 "io_timeout": 0, 00:14:45.004 "avg_latency_us": 15591.421858662892, 00:14:45.004 "min_latency_us": 309.435807860262, 00:14:45.004 "max_latency_us": 111268.10829694323 00:14:45.004 } 00:14:45.004 ], 00:14:45.004 
"core_count": 1 00:14:45.004 } 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.264 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:45.264 /dev/nbd0 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.524 1+0 records in 00:14:45.524 1+0 records out 00:14:45.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431133 s, 9.5 MB/s 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.524 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:45.524 /dev/nbd1 00:14:45.784 13:31:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.784 1+0 records in 00:14:45.784 1+0 records out 00:14:45.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00220663 s, 1.9 MB/s 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.784 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:46.044 
13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.044 13:31:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:46.305 /dev/nbd1 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.305 1+0 records in 00:14:46.305 1+0 records out 00:14:46.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299277 s, 13.7 MB/s 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.305 13:31:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.305 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.564 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.822 13:31:16 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78705 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78705 ']' 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78705 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78705 00:14:46.822 killing process with pid 78705 00:14:46.822 Received shutdown signal, test time was about 9.902795 seconds 00:14:46.822 00:14:46.822 Latency(us) 00:14:46.822 [2024-11-18T13:31:16.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.822 [2024-11-18T13:31:16.876Z] =================================================================================================================== 00:14:46.822 [2024-11-18T13:31:16.876Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.822 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.823 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78705' 00:14:46.823 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78705 00:14:46.823 [2024-11-18 13:31:16.760446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.823 13:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78705 00:14:47.392 [2024-11-18 13:31:17.157342] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:48.328 00:14:48.328 real 0m13.217s 00:14:48.328 user 0m16.673s 00:14:48.328 sys 0m1.832s 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.328 ************************************ 00:14:48.328 END TEST raid_rebuild_test_io 00:14:48.328 ************************************ 00:14:48.328 13:31:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:48.328 13:31:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:48.328 13:31:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.328 13:31:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.328 ************************************ 00:14:48.328 START TEST raid_rebuild_test_sb_io 00:14:48.328 ************************************ 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 
true true true 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs 
)) 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79114 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79114 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79114 ']' 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.328 13:31:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.328 13:31:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.586 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:48.586 Zero copy mechanism will not be used. 00:14:48.586 [2024-11-18 13:31:18.446307] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:48.586 [2024-11-18 13:31:18.446434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79114 ] 00:14:48.586 [2024-11-18 13:31:18.626550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.845 [2024-11-18 13:31:18.736543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.104 [2024-11-18 13:31:18.926615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.104 [2024-11-18 13:31:18.926655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.367 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.368 BaseBdev1_malloc 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.368 [2024-11-18 13:31:19.330358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:49.368 [2024-11-18 13:31:19.330537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.368 [2024-11-18 13:31:19.330584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:49.368 [2024-11-18 13:31:19.330616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.368 [2024-11-18 13:31:19.332689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.368 [2024-11-18 13:31:19.332767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.368 BaseBdev1 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.368 
BaseBdev2_malloc 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.368 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.369 [2024-11-18 13:31:19.384571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:49.369 [2024-11-18 13:31:19.384704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.369 [2024-11-18 13:31:19.384741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:49.369 [2024-11-18 13:31:19.384772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.369 [2024-11-18 13:31:19.386747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.369 [2024-11-18 13:31:19.386822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:49.369 BaseBdev2 00:14:49.369 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.369 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.369 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:49.369 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.369 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 BaseBdev3_malloc 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.629 13:31:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 [2024-11-18 13:31:19.473229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:49.629 [2024-11-18 13:31:19.473340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.629 [2024-11-18 13:31:19.473397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:49.629 [2024-11-18 13:31:19.473427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.629 [2024-11-18 13:31:19.475373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.629 [2024-11-18 13:31:19.475450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:49.629 BaseBdev3 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 BaseBdev4_malloc 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:49.629 13:31:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 [2024-11-18 13:31:19.525264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:49.629 [2024-11-18 13:31:19.525371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.629 [2024-11-18 13:31:19.525423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:49.629 [2024-11-18 13:31:19.525453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.629 [2024-11-18 13:31:19.527401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.629 [2024-11-18 13:31:19.527478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:49.629 BaseBdev4 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 spare_malloc 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 spare_delay 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 [2024-11-18 13:31:19.590117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.629 [2024-11-18 13:31:19.590258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.629 [2024-11-18 13:31:19.590296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:49.629 [2024-11-18 13:31:19.590326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.629 [2024-11-18 13:31:19.592275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.629 [2024-11-18 13:31:19.592368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.629 spare 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 [2024-11-18 13:31:19.602156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.629 [2024-11-18 13:31:19.603887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.629 [2024-11-18 13:31:19.603991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev3 is claimed 00:14:49.629 [2024-11-18 13:31:19.604072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.629 [2024-11-18 13:31:19.604278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:49.629 [2024-11-18 13:31:19.604328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.629 [2024-11-18 13:31:19.604577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:49.629 [2024-11-18 13:31:19.604780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:49.629 [2024-11-18 13:31:19.604824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:49.629 [2024-11-18 13:31:19.605006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:49.629 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.630 13:31:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.630 "name": "raid_bdev1", 00:14:49.630 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:49.630 "strip_size_kb": 0, 00:14:49.630 "state": "online", 00:14:49.630 "raid_level": "raid1", 00:14:49.630 "superblock": true, 00:14:49.630 "num_base_bdevs": 4, 00:14:49.630 "num_base_bdevs_discovered": 4, 00:14:49.630 "num_base_bdevs_operational": 4, 00:14:49.630 "base_bdevs_list": [ 00:14:49.630 { 00:14:49.630 "name": "BaseBdev1", 00:14:49.630 "uuid": "e22ee326-8e5d-5407-b3e6-606bf9b383f0", 00:14:49.630 "is_configured": true, 00:14:49.630 "data_offset": 2048, 00:14:49.630 "data_size": 63488 00:14:49.630 }, 00:14:49.630 { 00:14:49.630 "name": "BaseBdev2", 00:14:49.630 "uuid": "88ac5761-ca18-5b56-b498-5d08c269285f", 00:14:49.630 "is_configured": true, 00:14:49.630 "data_offset": 2048, 00:14:49.630 "data_size": 63488 00:14:49.630 }, 00:14:49.630 { 00:14:49.630 "name": "BaseBdev3", 00:14:49.630 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:49.630 "is_configured": true, 00:14:49.630 "data_offset": 2048, 00:14:49.630 "data_size": 63488 00:14:49.630 }, 00:14:49.630 { 00:14:49.630 "name": "BaseBdev4", 00:14:49.630 
"uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:49.630 "is_configured": true, 00:14:49.630 "data_offset": 2048, 00:14:49.630 "data_size": 63488 00:14:49.630 } 00:14:49.630 ] 00:14:49.630 }' 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.630 13:31:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.196 [2024-11-18 13:31:20.049657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:50.196 13:31:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.196 [2024-11-18 13:31:20.141230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.196 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.197 13:31:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.197 "name": "raid_bdev1", 00:14:50.197 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:50.197 "strip_size_kb": 0, 00:14:50.197 "state": "online", 00:14:50.197 "raid_level": "raid1", 00:14:50.197 "superblock": true, 00:14:50.197 "num_base_bdevs": 4, 00:14:50.197 "num_base_bdevs_discovered": 3, 00:14:50.197 "num_base_bdevs_operational": 3, 00:14:50.197 "base_bdevs_list": [ 00:14:50.197 { 00:14:50.197 "name": null, 00:14:50.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.197 "is_configured": false, 00:14:50.197 "data_offset": 0, 00:14:50.197 "data_size": 63488 00:14:50.197 }, 00:14:50.197 { 00:14:50.197 "name": "BaseBdev2", 00:14:50.197 "uuid": "88ac5761-ca18-5b56-b498-5d08c269285f", 00:14:50.197 "is_configured": true, 00:14:50.197 "data_offset": 2048, 00:14:50.197 "data_size": 63488 00:14:50.197 }, 00:14:50.197 { 00:14:50.197 "name": "BaseBdev3", 00:14:50.197 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:50.197 "is_configured": true, 00:14:50.197 "data_offset": 2048, 00:14:50.197 "data_size": 63488 00:14:50.197 }, 00:14:50.197 { 00:14:50.197 "name": "BaseBdev4", 00:14:50.197 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:50.197 "is_configured": true, 00:14:50.197 "data_offset": 2048, 00:14:50.197 "data_size": 63488 00:14:50.197 } 00:14:50.197 ] 00:14:50.197 }' 00:14:50.197 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.197 13:31:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.197 [2024-11-18 13:31:20.233349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:50.197 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:50.197 Zero copy mechanism will not be used. 00:14:50.197 Running I/O for 60 seconds... 00:14:50.766 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:50.766 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.766 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.766 [2024-11-18 13:31:20.587440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.766 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.766 13:31:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:50.766 [2024-11-18 13:31:20.677887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:50.766 [2024-11-18 13:31:20.679891] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.766 [2024-11-18 13:31:20.801547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:50.766 [2024-11-18 13:31:20.802034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:51.026 [2024-11-18 13:31:21.019193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:51.026 [2024-11-18 13:31:21.019583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:51.543 165.00 IOPS, 495.00 MiB/s 
[2024-11-18T13:31:21.597Z] [2024-11-18 13:31:21.349787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:51.543 [2024-11-18 13:31:21.580512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:51.543 [2024-11-18 13:31:21.581295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:51.803 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.804 "name": "raid_bdev1", 00:14:51.804 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:51.804 "strip_size_kb": 0, 00:14:51.804 "state": "online", 00:14:51.804 "raid_level": "raid1", 00:14:51.804 "superblock": true, 00:14:51.804 
"num_base_bdevs": 4, 00:14:51.804 "num_base_bdevs_discovered": 4, 00:14:51.804 "num_base_bdevs_operational": 4, 00:14:51.804 "process": { 00:14:51.804 "type": "rebuild", 00:14:51.804 "target": "spare", 00:14:51.804 "progress": { 00:14:51.804 "blocks": 10240, 00:14:51.804 "percent": 16 00:14:51.804 } 00:14:51.804 }, 00:14:51.804 "base_bdevs_list": [ 00:14:51.804 { 00:14:51.804 "name": "spare", 00:14:51.804 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:51.804 "is_configured": true, 00:14:51.804 "data_offset": 2048, 00:14:51.804 "data_size": 63488 00:14:51.804 }, 00:14:51.804 { 00:14:51.804 "name": "BaseBdev2", 00:14:51.804 "uuid": "88ac5761-ca18-5b56-b498-5d08c269285f", 00:14:51.804 "is_configured": true, 00:14:51.804 "data_offset": 2048, 00:14:51.804 "data_size": 63488 00:14:51.804 }, 00:14:51.804 { 00:14:51.804 "name": "BaseBdev3", 00:14:51.804 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:51.804 "is_configured": true, 00:14:51.804 "data_offset": 2048, 00:14:51.804 "data_size": 63488 00:14:51.804 }, 00:14:51.804 { 00:14:51.804 "name": "BaseBdev4", 00:14:51.804 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:51.804 "is_configured": true, 00:14:51.804 "data_offset": 2048, 00:14:51.804 "data_size": 63488 00:14:51.804 } 00:14:51.804 ] 00:14:51.804 }' 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:51.804 13:31:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.804 [2024-11-18 13:31:21.807220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.064 [2024-11-18 13:31:21.907497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:52.064 [2024-11-18 13:31:22.009531] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:52.064 [2024-11-18 13:31:22.012698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.064 [2024-11-18 13:31:22.012737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.064 [2024-11-18 13:31:22.012750] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:52.064 [2024-11-18 13:31:22.040717] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.064 13:31:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.064 "name": "raid_bdev1", 00:14:52.064 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:52.064 "strip_size_kb": 0, 00:14:52.064 "state": "online", 00:14:52.064 "raid_level": "raid1", 00:14:52.064 "superblock": true, 00:14:52.064 "num_base_bdevs": 4, 00:14:52.064 "num_base_bdevs_discovered": 3, 00:14:52.064 "num_base_bdevs_operational": 3, 00:14:52.064 "base_bdevs_list": [ 00:14:52.064 { 00:14:52.064 "name": null, 00:14:52.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.064 "is_configured": false, 00:14:52.064 "data_offset": 0, 00:14:52.064 "data_size": 63488 00:14:52.064 }, 00:14:52.064 { 00:14:52.064 "name": "BaseBdev2", 00:14:52.064 "uuid": "88ac5761-ca18-5b56-b498-5d08c269285f", 00:14:52.064 "is_configured": true, 00:14:52.064 "data_offset": 2048, 00:14:52.064 "data_size": 63488 00:14:52.064 }, 00:14:52.064 { 00:14:52.064 "name": "BaseBdev3", 00:14:52.064 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:52.064 "is_configured": true, 00:14:52.064 "data_offset": 2048, 00:14:52.064 
"data_size": 63488 00:14:52.064 }, 00:14:52.064 { 00:14:52.064 "name": "BaseBdev4", 00:14:52.064 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:52.064 "is_configured": true, 00:14:52.064 "data_offset": 2048, 00:14:52.064 "data_size": 63488 00:14:52.064 } 00:14:52.064 ] 00:14:52.064 }' 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.064 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.583 141.00 IOPS, 423.00 MiB/s [2024-11-18T13:31:22.637Z] 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.583 "name": "raid_bdev1", 00:14:52.583 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:52.583 "strip_size_kb": 0, 00:14:52.583 "state": "online", 00:14:52.583 "raid_level": "raid1", 00:14:52.583 
"superblock": true, 00:14:52.583 "num_base_bdevs": 4, 00:14:52.583 "num_base_bdevs_discovered": 3, 00:14:52.583 "num_base_bdevs_operational": 3, 00:14:52.583 "base_bdevs_list": [ 00:14:52.583 { 00:14:52.583 "name": null, 00:14:52.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.583 "is_configured": false, 00:14:52.583 "data_offset": 0, 00:14:52.583 "data_size": 63488 00:14:52.583 }, 00:14:52.583 { 00:14:52.583 "name": "BaseBdev2", 00:14:52.583 "uuid": "88ac5761-ca18-5b56-b498-5d08c269285f", 00:14:52.583 "is_configured": true, 00:14:52.583 "data_offset": 2048, 00:14:52.583 "data_size": 63488 00:14:52.583 }, 00:14:52.583 { 00:14:52.583 "name": "BaseBdev3", 00:14:52.583 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:52.583 "is_configured": true, 00:14:52.583 "data_offset": 2048, 00:14:52.583 "data_size": 63488 00:14:52.583 }, 00:14:52.583 { 00:14:52.583 "name": "BaseBdev4", 00:14:52.583 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:52.583 "is_configured": true, 00:14:52.583 "data_offset": 2048, 00:14:52.583 "data_size": 63488 00:14:52.583 } 00:14:52.583 ] 00:14:52.583 }' 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.583 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.843 [2024-11-18 13:31:22.639061] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.843 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.843 13:31:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:52.843 [2024-11-18 13:31:22.709380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:52.843 [2024-11-18 13:31:22.711442] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.843 [2024-11-18 13:31:22.846170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:52.843 [2024-11-18 13:31:22.847721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:53.102 [2024-11-18 13:31:23.070899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:53.102 [2024-11-18 13:31:23.071332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:53.361 162.00 IOPS, 486.00 MiB/s [2024-11-18T13:31:23.415Z] [2024-11-18 13:31:23.400125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:53.361 [2024-11-18 13:31:23.400584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:53.620 [2024-11-18 13:31:23.522243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:53.620 [2024-11-18 13:31:23.522557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:53.879 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:53.879 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.879 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.879 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.880 "name": "raid_bdev1", 00:14:53.880 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:53.880 "strip_size_kb": 0, 00:14:53.880 "state": "online", 00:14:53.880 "raid_level": "raid1", 00:14:53.880 "superblock": true, 00:14:53.880 "num_base_bdevs": 4, 00:14:53.880 "num_base_bdevs_discovered": 4, 00:14:53.880 "num_base_bdevs_operational": 4, 00:14:53.880 "process": { 00:14:53.880 "type": "rebuild", 00:14:53.880 "target": "spare", 00:14:53.880 "progress": { 00:14:53.880 "blocks": 10240, 00:14:53.880 "percent": 16 00:14:53.880 } 00:14:53.880 }, 00:14:53.880 "base_bdevs_list": [ 00:14:53.880 { 00:14:53.880 "name": "spare", 00:14:53.880 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:53.880 "is_configured": true, 00:14:53.880 "data_offset": 2048, 00:14:53.880 "data_size": 63488 00:14:53.880 }, 00:14:53.880 { 00:14:53.880 "name": 
"BaseBdev2", 00:14:53.880 "uuid": "88ac5761-ca18-5b56-b498-5d08c269285f", 00:14:53.880 "is_configured": true, 00:14:53.880 "data_offset": 2048, 00:14:53.880 "data_size": 63488 00:14:53.880 }, 00:14:53.880 { 00:14:53.880 "name": "BaseBdev3", 00:14:53.880 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:53.880 "is_configured": true, 00:14:53.880 "data_offset": 2048, 00:14:53.880 "data_size": 63488 00:14:53.880 }, 00:14:53.880 { 00:14:53.880 "name": "BaseBdev4", 00:14:53.880 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:53.880 "is_configured": true, 00:14:53.880 "data_offset": 2048, 00:14:53.880 "data_size": 63488 00:14:53.880 } 00:14:53.880 ] 00:14:53.880 }' 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:53.880 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.880 13:31:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.880 [2024-11-18 13:31:23.834881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.880 [2024-11-18 13:31:23.872263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:54.139 [2024-11-18 13:31:24.077592] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:54.139 [2024-11-18 13:31:24.077670] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:54.139 [2024-11-18 13:31:24.078284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.139 "name": "raid_bdev1", 00:14:54.139 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:54.139 "strip_size_kb": 0, 00:14:54.139 "state": "online", 00:14:54.139 "raid_level": "raid1", 00:14:54.139 "superblock": true, 00:14:54.139 "num_base_bdevs": 4, 00:14:54.139 "num_base_bdevs_discovered": 3, 00:14:54.139 "num_base_bdevs_operational": 3, 00:14:54.139 "process": { 00:14:54.139 "type": "rebuild", 00:14:54.139 "target": "spare", 00:14:54.139 "progress": { 00:14:54.139 "blocks": 14336, 00:14:54.139 "percent": 22 00:14:54.139 } 00:14:54.139 }, 00:14:54.139 "base_bdevs_list": [ 00:14:54.139 { 00:14:54.139 "name": "spare", 00:14:54.139 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:54.139 "is_configured": true, 00:14:54.139 "data_offset": 2048, 00:14:54.139 "data_size": 63488 00:14:54.139 }, 00:14:54.139 { 00:14:54.139 "name": null, 00:14:54.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.139 "is_configured": false, 00:14:54.139 "data_offset": 0, 00:14:54.139 "data_size": 63488 00:14:54.139 }, 00:14:54.139 { 00:14:54.139 "name": "BaseBdev3", 00:14:54.139 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:54.139 "is_configured": true, 00:14:54.139 "data_offset": 2048, 00:14:54.139 "data_size": 63488 00:14:54.139 }, 00:14:54.139 { 00:14:54.139 "name": "BaseBdev4", 00:14:54.139 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:54.139 "is_configured": true, 00:14:54.139 "data_offset": 2048, 00:14:54.139 "data_size": 63488 00:14:54.139 } 00:14:54.139 ] 00:14:54.139 }' 00:14:54.139 13:31:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.139 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.398 135.25 IOPS, 405.75 MiB/s [2024-11-18T13:31:24.452Z] 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=498 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.398 "name": "raid_bdev1", 
00:14:54.398 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:54.398 "strip_size_kb": 0, 00:14:54.398 "state": "online", 00:14:54.398 "raid_level": "raid1", 00:14:54.398 "superblock": true, 00:14:54.398 "num_base_bdevs": 4, 00:14:54.398 "num_base_bdevs_discovered": 3, 00:14:54.398 "num_base_bdevs_operational": 3, 00:14:54.398 "process": { 00:14:54.398 "type": "rebuild", 00:14:54.398 "target": "spare", 00:14:54.398 "progress": { 00:14:54.398 "blocks": 14336, 00:14:54.398 "percent": 22 00:14:54.398 } 00:14:54.398 }, 00:14:54.398 "base_bdevs_list": [ 00:14:54.398 { 00:14:54.398 "name": "spare", 00:14:54.398 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:54.398 "is_configured": true, 00:14:54.398 "data_offset": 2048, 00:14:54.398 "data_size": 63488 00:14:54.398 }, 00:14:54.398 { 00:14:54.398 "name": null, 00:14:54.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.398 "is_configured": false, 00:14:54.398 "data_offset": 0, 00:14:54.398 "data_size": 63488 00:14:54.398 }, 00:14:54.398 { 00:14:54.398 "name": "BaseBdev3", 00:14:54.398 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:54.398 "is_configured": true, 00:14:54.398 "data_offset": 2048, 00:14:54.398 "data_size": 63488 00:14:54.398 }, 00:14:54.398 { 00:14:54.398 "name": "BaseBdev4", 00:14:54.398 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:54.398 "is_configured": true, 00:14:54.398 "data_offset": 2048, 00:14:54.398 "data_size": 63488 00:14:54.398 } 00:14:54.398 ] 00:14:54.398 }' 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.398 [2024-11-18 13:31:24.304428] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:54.398 [2024-11-18 13:31:24.304921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.398 13:31:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.656 [2024-11-18 13:31:24.637830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:54.915 [2024-11-18 13:31:24.756562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:55.175 [2024-11-18 13:31:25.076744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:55.442 119.20 IOPS, 357.60 MiB/s [2024-11-18T13:31:25.496Z] [2024-11-18 13:31:25.310898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.442 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.442 "name": "raid_bdev1", 00:14:55.442 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:55.442 "strip_size_kb": 0, 00:14:55.442 "state": "online", 00:14:55.442 "raid_level": "raid1", 00:14:55.442 "superblock": true, 00:14:55.442 "num_base_bdevs": 4, 00:14:55.442 "num_base_bdevs_discovered": 3, 00:14:55.442 "num_base_bdevs_operational": 3, 00:14:55.442 "process": { 00:14:55.442 "type": "rebuild", 00:14:55.442 "target": "spare", 00:14:55.442 "progress": { 00:14:55.442 "blocks": 32768, 00:14:55.442 "percent": 51 00:14:55.442 } 00:14:55.442 }, 00:14:55.442 "base_bdevs_list": [ 00:14:55.442 { 00:14:55.442 "name": "spare", 00:14:55.442 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:55.442 "is_configured": true, 00:14:55.442 "data_offset": 2048, 00:14:55.442 "data_size": 63488 00:14:55.442 }, 00:14:55.442 { 00:14:55.442 "name": null, 00:14:55.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.442 "is_configured": false, 00:14:55.442 "data_offset": 0, 00:14:55.442 "data_size": 63488 00:14:55.442 }, 00:14:55.442 { 00:14:55.442 "name": "BaseBdev3", 00:14:55.442 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:55.442 "is_configured": true, 00:14:55.442 "data_offset": 2048, 00:14:55.442 "data_size": 63488 00:14:55.442 }, 00:14:55.442 { 00:14:55.443 "name": "BaseBdev4", 00:14:55.443 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:55.443 "is_configured": true, 00:14:55.443 "data_offset": 2048, 00:14:55.443 "data_size": 63488 00:14:55.443 } 00:14:55.443 ] 00:14:55.443 }' 00:14:55.443 
13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.443 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.443 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.443 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.443 13:31:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.721 [2024-11-18 13:31:25.721885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:55.721 [2024-11-18 13:31:25.722833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:56.548 108.00 IOPS, 324.00 MiB/s [2024-11-18T13:31:26.602Z] 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.548 13:31:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.548 "name": "raid_bdev1", 00:14:56.548 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:56.548 "strip_size_kb": 0, 00:14:56.548 "state": "online", 00:14:56.548 "raid_level": "raid1", 00:14:56.548 "superblock": true, 00:14:56.548 "num_base_bdevs": 4, 00:14:56.548 "num_base_bdevs_discovered": 3, 00:14:56.548 "num_base_bdevs_operational": 3, 00:14:56.548 "process": { 00:14:56.548 "type": "rebuild", 00:14:56.548 "target": "spare", 00:14:56.548 "progress": { 00:14:56.548 "blocks": 49152, 00:14:56.548 "percent": 77 00:14:56.548 } 00:14:56.548 }, 00:14:56.548 "base_bdevs_list": [ 00:14:56.548 { 00:14:56.548 "name": "spare", 00:14:56.548 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:56.548 "is_configured": true, 00:14:56.548 "data_offset": 2048, 00:14:56.548 "data_size": 63488 00:14:56.548 }, 00:14:56.548 { 00:14:56.548 "name": null, 00:14:56.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.548 "is_configured": false, 00:14:56.548 "data_offset": 0, 00:14:56.548 "data_size": 63488 00:14:56.548 }, 00:14:56.548 { 00:14:56.548 "name": "BaseBdev3", 00:14:56.548 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:56.548 "is_configured": true, 00:14:56.548 "data_offset": 2048, 00:14:56.548 "data_size": 63488 00:14:56.548 }, 00:14:56.548 { 00:14:56.548 "name": "BaseBdev4", 00:14:56.548 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:56.548 "is_configured": true, 00:14:56.548 "data_offset": 2048, 00:14:56.548 "data_size": 63488 00:14:56.548 } 00:14:56.548 ] 00:14:56.548 }' 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.548 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.808 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.808 13:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.808 [2024-11-18 13:31:26.822206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:56.808 [2024-11-18 13:31:26.822694] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:57.067 [2024-11-18 13:31:26.937607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:57.326 [2024-11-18 13:31:27.160105] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:57.326 98.57 IOPS, 295.71 MiB/s [2024-11-18T13:31:27.380Z] [2024-11-18 13:31:27.264870] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:57.326 [2024-11-18 13:31:27.268470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.896 "name": "raid_bdev1", 00:14:57.896 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:57.896 "strip_size_kb": 0, 00:14:57.896 "state": "online", 00:14:57.896 "raid_level": "raid1", 00:14:57.896 "superblock": true, 00:14:57.896 "num_base_bdevs": 4, 00:14:57.896 "num_base_bdevs_discovered": 3, 00:14:57.896 "num_base_bdevs_operational": 3, 00:14:57.896 "base_bdevs_list": [ 00:14:57.896 { 00:14:57.896 "name": "spare", 00:14:57.896 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:57.896 "is_configured": true, 00:14:57.896 "data_offset": 2048, 00:14:57.896 "data_size": 63488 00:14:57.896 }, 00:14:57.896 { 00:14:57.896 "name": null, 00:14:57.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.896 "is_configured": false, 00:14:57.896 "data_offset": 0, 00:14:57.896 "data_size": 63488 00:14:57.896 }, 00:14:57.896 { 00:14:57.896 "name": "BaseBdev3", 00:14:57.896 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:57.896 "is_configured": true, 00:14:57.896 "data_offset": 2048, 00:14:57.896 "data_size": 63488 00:14:57.896 }, 00:14:57.896 { 00:14:57.896 "name": "BaseBdev4", 00:14:57.896 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:57.896 "is_configured": true, 00:14:57.896 "data_offset": 2048, 00:14:57.896 "data_size": 63488 00:14:57.896 } 00:14:57.896 ] 00:14:57.896 }' 00:14:57.896 
13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.896 "name": "raid_bdev1", 00:14:57.896 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:57.896 "strip_size_kb": 0, 00:14:57.896 "state": "online", 00:14:57.896 "raid_level": "raid1", 00:14:57.896 
"superblock": true, 00:14:57.896 "num_base_bdevs": 4, 00:14:57.896 "num_base_bdevs_discovered": 3, 00:14:57.896 "num_base_bdevs_operational": 3, 00:14:57.896 "base_bdevs_list": [ 00:14:57.896 { 00:14:57.896 "name": "spare", 00:14:57.896 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:57.896 "is_configured": true, 00:14:57.896 "data_offset": 2048, 00:14:57.896 "data_size": 63488 00:14:57.896 }, 00:14:57.896 { 00:14:57.896 "name": null, 00:14:57.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.896 "is_configured": false, 00:14:57.896 "data_offset": 0, 00:14:57.896 "data_size": 63488 00:14:57.896 }, 00:14:57.896 { 00:14:57.896 "name": "BaseBdev3", 00:14:57.896 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:57.896 "is_configured": true, 00:14:57.896 "data_offset": 2048, 00:14:57.896 "data_size": 63488 00:14:57.896 }, 00:14:57.896 { 00:14:57.896 "name": "BaseBdev4", 00:14:57.896 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:57.896 "is_configured": true, 00:14:57.896 "data_offset": 2048, 00:14:57.896 "data_size": 63488 00:14:57.896 } 00:14:57.896 ] 00:14:57.896 }' 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.896 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.155 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.155 "name": "raid_bdev1", 00:14:58.155 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:14:58.155 "strip_size_kb": 0, 00:14:58.155 "state": "online", 00:14:58.155 "raid_level": "raid1", 00:14:58.155 "superblock": true, 00:14:58.155 "num_base_bdevs": 4, 00:14:58.155 "num_base_bdevs_discovered": 3, 00:14:58.155 "num_base_bdevs_operational": 3, 00:14:58.155 "base_bdevs_list": [ 00:14:58.155 { 00:14:58.155 "name": "spare", 00:14:58.155 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:14:58.155 "is_configured": true, 00:14:58.155 "data_offset": 2048, 00:14:58.155 "data_size": 63488 00:14:58.155 }, 00:14:58.155 { 
00:14:58.155 "name": null, 00:14:58.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.155 "is_configured": false, 00:14:58.155 "data_offset": 0, 00:14:58.155 "data_size": 63488 00:14:58.155 }, 00:14:58.155 { 00:14:58.155 "name": "BaseBdev3", 00:14:58.155 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:14:58.155 "is_configured": true, 00:14:58.155 "data_offset": 2048, 00:14:58.155 "data_size": 63488 00:14:58.155 }, 00:14:58.155 { 00:14:58.155 "name": "BaseBdev4", 00:14:58.155 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:14:58.155 "is_configured": true, 00:14:58.155 "data_offset": 2048, 00:14:58.155 "data_size": 63488 00:14:58.155 } 00:14:58.155 ] 00:14:58.155 }' 00:14:58.155 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.155 13:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.414 90.62 IOPS, 271.88 MiB/s [2024-11-18T13:31:28.468Z] 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.414 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.414 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.414 [2024-11-18 13:31:28.370409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.414 [2024-11-18 13:31:28.370523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.414 00:14:58.414 Latency(us) 00:14:58.414 [2024-11-18T13:31:28.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.414 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:58.414 raid_bdev1 : 8.23 89.42 268.26 0.00 0.00 15595.45 287.97 109894.43 00:14:58.414 [2024-11-18T13:31:28.468Z] 
=================================================================================================================== 00:14:58.414 [2024-11-18T13:31:28.468Z] Total : 89.42 268.26 0.00 0.00 15595.45 287.97 109894.43 00:14:58.673 [2024-11-18 13:31:28.470900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.673 [2024-11-18 13:31:28.470985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.673 [2024-11-18 13:31:28.471098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.673 [2024-11-18 13:31:28.471166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:58.673 { 00:14:58.673 "results": [ 00:14:58.673 { 00:14:58.673 "job": "raid_bdev1", 00:14:58.673 "core_mask": "0x1", 00:14:58.673 "workload": "randrw", 00:14:58.673 "percentage": 50, 00:14:58.673 "status": "finished", 00:14:58.673 "queue_depth": 2, 00:14:58.673 "io_size": 3145728, 00:14:58.673 "runtime": 8.230679, 00:14:58.673 "iops": 89.42154104175367, 00:14:58.673 "mibps": 268.264623125261, 00:14:58.673 "io_failed": 0, 00:14:58.674 "io_timeout": 0, 00:14:58.674 "avg_latency_us": 15595.445149041201, 00:14:58.674 "min_latency_us": 287.97205240174674, 00:14:58.674 "max_latency_us": 109894.42794759825 00:14:58.674 } 00:14:58.674 ], 00:14:58.674 "core_count": 1 00:14:58.674 } 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.674 13:31:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:58.674 /dev/nbd0 00:14:58.674 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.934 1+0 records in 00:14:58.934 1+0 records out 00:14:58.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041646 s, 9.8 MB/s 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
'' ']' 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:58.934 /dev/nbd1 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:58.934 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.194 1+0 records in 00:14:59.194 1+0 records out 00:14:59.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478435 s, 8.6 MB/s 00:14:59.194 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.194 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:59.194 13:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:59.194 
13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.194 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:59.453 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:59.454 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:59.454 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:59.454 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:59.454 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:59.454 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:59.454 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.454 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:59.714 /dev/nbd1 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.714 1+0 records in 00:14:59.714 1+0 records out 00:14:59.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443242 s, 9.2 MB/s 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:59.714 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.714 13:31:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.974 13:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.233 [2024-11-18 13:31:30.152509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:00.233 [2024-11-18 13:31:30.152567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.233 [2024-11-18 13:31:30.152606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:00.233 [2024-11-18 13:31:30.152615] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.233 [2024-11-18 13:31:30.154645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.233 [2024-11-18 13:31:30.154690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:00.233 [2024-11-18 13:31:30.154781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:00.233 [2024-11-18 13:31:30.154837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.233 [2024-11-18 13:31:30.154967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.233 [2024-11-18 13:31:30.155059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:00.233 spare 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.233 [2024-11-18 13:31:30.254974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:00.233 [2024-11-18 13:31:30.255003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:00.233 [2024-11-18 13:31:30.255302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:00.233 [2024-11-18 13:31:30.255487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:00.233 [2024-11-18 13:31:30.255509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:00.233 [2024-11-18 13:31:30.255683] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.233 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.234 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.493 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.493 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.493 "name": "raid_bdev1", 00:15:00.493 "uuid": 
"fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:00.493 "strip_size_kb": 0, 00:15:00.493 "state": "online", 00:15:00.493 "raid_level": "raid1", 00:15:00.493 "superblock": true, 00:15:00.493 "num_base_bdevs": 4, 00:15:00.493 "num_base_bdevs_discovered": 3, 00:15:00.493 "num_base_bdevs_operational": 3, 00:15:00.493 "base_bdevs_list": [ 00:15:00.493 { 00:15:00.493 "name": "spare", 00:15:00.493 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:15:00.493 "is_configured": true, 00:15:00.493 "data_offset": 2048, 00:15:00.493 "data_size": 63488 00:15:00.493 }, 00:15:00.493 { 00:15:00.493 "name": null, 00:15:00.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.493 "is_configured": false, 00:15:00.493 "data_offset": 2048, 00:15:00.493 "data_size": 63488 00:15:00.493 }, 00:15:00.493 { 00:15:00.493 "name": "BaseBdev3", 00:15:00.493 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:00.493 "is_configured": true, 00:15:00.493 "data_offset": 2048, 00:15:00.493 "data_size": 63488 00:15:00.493 }, 00:15:00.493 { 00:15:00.493 "name": "BaseBdev4", 00:15:00.493 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:00.493 "is_configured": true, 00:15:00.493 "data_offset": 2048, 00:15:00.493 "data_size": 63488 00:15:00.493 } 00:15:00.493 ] 00:15:00.493 }' 00:15:00.493 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.493 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.751 "name": "raid_bdev1", 00:15:00.751 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:00.751 "strip_size_kb": 0, 00:15:00.751 "state": "online", 00:15:00.751 "raid_level": "raid1", 00:15:00.751 "superblock": true, 00:15:00.751 "num_base_bdevs": 4, 00:15:00.751 "num_base_bdevs_discovered": 3, 00:15:00.751 "num_base_bdevs_operational": 3, 00:15:00.751 "base_bdevs_list": [ 00:15:00.751 { 00:15:00.751 "name": "spare", 00:15:00.751 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:15:00.751 "is_configured": true, 00:15:00.751 "data_offset": 2048, 00:15:00.751 "data_size": 63488 00:15:00.751 }, 00:15:00.751 { 00:15:00.751 "name": null, 00:15:00.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.751 "is_configured": false, 00:15:00.751 "data_offset": 2048, 00:15:00.751 "data_size": 63488 00:15:00.751 }, 00:15:00.751 { 00:15:00.751 "name": "BaseBdev3", 00:15:00.751 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:00.751 "is_configured": true, 00:15:00.751 "data_offset": 2048, 00:15:00.751 "data_size": 63488 00:15:00.751 }, 00:15:00.751 { 00:15:00.751 "name": "BaseBdev4", 00:15:00.751 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:00.751 "is_configured": true, 00:15:00.751 "data_offset": 2048, 00:15:00.751 "data_size": 63488 
00:15:00.751 } 00:15:00.751 ] 00:15:00.751 }' 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.751 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.010 [2024-11-18 13:31:30.883416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.010 13:31:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.010 "name": "raid_bdev1", 00:15:01.010 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:01.010 "strip_size_kb": 0, 00:15:01.010 "state": "online", 00:15:01.010 "raid_level": "raid1", 00:15:01.010 "superblock": true, 00:15:01.010 "num_base_bdevs": 4, 00:15:01.010 "num_base_bdevs_discovered": 2, 00:15:01.010 "num_base_bdevs_operational": 2, 00:15:01.010 "base_bdevs_list": [ 00:15:01.010 { 00:15:01.010 "name": null, 00:15:01.010 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:01.010 "is_configured": false, 00:15:01.010 "data_offset": 0, 00:15:01.010 "data_size": 63488 00:15:01.010 }, 00:15:01.010 { 00:15:01.010 "name": null, 00:15:01.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.010 "is_configured": false, 00:15:01.010 "data_offset": 2048, 00:15:01.010 "data_size": 63488 00:15:01.010 }, 00:15:01.010 { 00:15:01.010 "name": "BaseBdev3", 00:15:01.010 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:01.010 "is_configured": true, 00:15:01.010 "data_offset": 2048, 00:15:01.010 "data_size": 63488 00:15:01.010 }, 00:15:01.010 { 00:15:01.010 "name": "BaseBdev4", 00:15:01.010 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:01.010 "is_configured": true, 00:15:01.010 "data_offset": 2048, 00:15:01.010 "data_size": 63488 00:15:01.010 } 00:15:01.010 ] 00:15:01.010 }' 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.010 13:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.576 13:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.576 13:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.576 13:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.576 [2024-11-18 13:31:31.350817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.576 [2024-11-18 13:31:31.351051] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:01.576 [2024-11-18 13:31:31.351115] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:01.576 [2024-11-18 13:31:31.351189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:01.576 [2024-11-18 13:31:31.365201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:01.576 13:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.576 13:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:01.576 [2024-11-18 13:31:31.366997] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.516 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.516 "name": "raid_bdev1", 00:15:02.516 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:02.516 "strip_size_kb": 0, 00:15:02.516 "state": "online", 
00:15:02.516 "raid_level": "raid1", 00:15:02.516 "superblock": true, 00:15:02.516 "num_base_bdevs": 4, 00:15:02.516 "num_base_bdevs_discovered": 3, 00:15:02.516 "num_base_bdevs_operational": 3, 00:15:02.516 "process": { 00:15:02.516 "type": "rebuild", 00:15:02.516 "target": "spare", 00:15:02.516 "progress": { 00:15:02.516 "blocks": 20480, 00:15:02.516 "percent": 32 00:15:02.516 } 00:15:02.516 }, 00:15:02.516 "base_bdevs_list": [ 00:15:02.516 { 00:15:02.516 "name": "spare", 00:15:02.516 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:15:02.516 "is_configured": true, 00:15:02.516 "data_offset": 2048, 00:15:02.516 "data_size": 63488 00:15:02.516 }, 00:15:02.516 { 00:15:02.516 "name": null, 00:15:02.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.516 "is_configured": false, 00:15:02.516 "data_offset": 2048, 00:15:02.516 "data_size": 63488 00:15:02.516 }, 00:15:02.516 { 00:15:02.516 "name": "BaseBdev3", 00:15:02.516 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:02.516 "is_configured": true, 00:15:02.516 "data_offset": 2048, 00:15:02.516 "data_size": 63488 00:15:02.516 }, 00:15:02.516 { 00:15:02.516 "name": "BaseBdev4", 00:15:02.516 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:02.516 "is_configured": true, 00:15:02.516 "data_offset": 2048, 00:15:02.516 "data_size": 63488 00:15:02.516 } 00:15:02.516 ] 00:15:02.516 }' 00:15:02.517 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.517 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.517 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.517 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.517 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:02.517 13:31:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.517 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.517 [2024-11-18 13:31:32.534882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.776 [2024-11-18 13:31:32.571776] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:02.776 [2024-11-18 13:31:32.571884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.776 [2024-11-18 13:31:32.571925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:02.776 [2024-11-18 13:31:32.571946] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.776 13:31:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.776 "name": "raid_bdev1", 00:15:02.776 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:02.776 "strip_size_kb": 0, 00:15:02.776 "state": "online", 00:15:02.776 "raid_level": "raid1", 00:15:02.776 "superblock": true, 00:15:02.776 "num_base_bdevs": 4, 00:15:02.776 "num_base_bdevs_discovered": 2, 00:15:02.776 "num_base_bdevs_operational": 2, 00:15:02.776 "base_bdevs_list": [ 00:15:02.776 { 00:15:02.776 "name": null, 00:15:02.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.776 "is_configured": false, 00:15:02.776 "data_offset": 0, 00:15:02.776 "data_size": 63488 00:15:02.776 }, 00:15:02.776 { 00:15:02.776 "name": null, 00:15:02.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.776 "is_configured": false, 00:15:02.776 "data_offset": 2048, 00:15:02.776 "data_size": 63488 00:15:02.776 }, 00:15:02.776 { 00:15:02.776 "name": "BaseBdev3", 00:15:02.776 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:02.776 "is_configured": true, 00:15:02.776 "data_offset": 2048, 00:15:02.776 "data_size": 63488 00:15:02.776 }, 00:15:02.776 { 00:15:02.776 "name": "BaseBdev4", 00:15:02.776 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:02.776 "is_configured": true, 00:15:02.776 "data_offset": 2048, 00:15:02.776 
"data_size": 63488 00:15:02.776 } 00:15:02.776 ] 00:15:02.776 }' 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.776 13:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.036 13:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:03.036 13:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.036 13:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.036 [2024-11-18 13:31:33.035139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:03.036 [2024-11-18 13:31:33.035245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.036 [2024-11-18 13:31:33.035292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:03.036 [2024-11-18 13:31:33.035320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.036 [2024-11-18 13:31:33.035799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.036 [2024-11-18 13:31:33.035858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:03.036 [2024-11-18 13:31:33.035967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:03.036 [2024-11-18 13:31:33.036016] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:03.036 [2024-11-18 13:31:33.036057] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:03.036 [2024-11-18 13:31:33.036115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.036 [2024-11-18 13:31:33.049419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:03.036 spare 00:15:03.036 13:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.036 [2024-11-18 13:31:33.051240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.036 13:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.415 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.415 "name": "raid_bdev1", 00:15:04.415 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:04.416 "strip_size_kb": 0, 00:15:04.416 
"state": "online", 00:15:04.416 "raid_level": "raid1", 00:15:04.416 "superblock": true, 00:15:04.416 "num_base_bdevs": 4, 00:15:04.416 "num_base_bdevs_discovered": 3, 00:15:04.416 "num_base_bdevs_operational": 3, 00:15:04.416 "process": { 00:15:04.416 "type": "rebuild", 00:15:04.416 "target": "spare", 00:15:04.416 "progress": { 00:15:04.416 "blocks": 20480, 00:15:04.416 "percent": 32 00:15:04.416 } 00:15:04.416 }, 00:15:04.416 "base_bdevs_list": [ 00:15:04.416 { 00:15:04.416 "name": "spare", 00:15:04.416 "uuid": "0743b2a5-c12e-56b1-871e-94e571ab07c1", 00:15:04.416 "is_configured": true, 00:15:04.416 "data_offset": 2048, 00:15:04.416 "data_size": 63488 00:15:04.416 }, 00:15:04.416 { 00:15:04.416 "name": null, 00:15:04.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.416 "is_configured": false, 00:15:04.416 "data_offset": 2048, 00:15:04.416 "data_size": 63488 00:15:04.416 }, 00:15:04.416 { 00:15:04.416 "name": "BaseBdev3", 00:15:04.416 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:04.416 "is_configured": true, 00:15:04.416 "data_offset": 2048, 00:15:04.416 "data_size": 63488 00:15:04.416 }, 00:15:04.416 { 00:15:04.416 "name": "BaseBdev4", 00:15:04.416 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:04.416 "is_configured": true, 00:15:04.416 "data_offset": 2048, 00:15:04.416 "data_size": 63488 00:15:04.416 } 00:15:04.416 ] 00:15:04.416 }' 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:04.416 13:31:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.416 [2024-11-18 13:31:34.211547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.416 [2024-11-18 13:31:34.255900] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:04.416 [2024-11-18 13:31:34.255973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.416 [2024-11-18 13:31:34.255988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.416 [2024-11-18 13:31:34.255997] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.416 13:31:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.416 "name": "raid_bdev1", 00:15:04.416 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:04.416 "strip_size_kb": 0, 00:15:04.416 "state": "online", 00:15:04.416 "raid_level": "raid1", 00:15:04.416 "superblock": true, 00:15:04.416 "num_base_bdevs": 4, 00:15:04.416 "num_base_bdevs_discovered": 2, 00:15:04.416 "num_base_bdevs_operational": 2, 00:15:04.416 "base_bdevs_list": [ 00:15:04.416 { 00:15:04.416 "name": null, 00:15:04.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.416 "is_configured": false, 00:15:04.416 "data_offset": 0, 00:15:04.416 "data_size": 63488 00:15:04.416 }, 00:15:04.416 { 00:15:04.416 "name": null, 00:15:04.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.416 "is_configured": false, 00:15:04.416 "data_offset": 2048, 00:15:04.416 "data_size": 63488 00:15:04.416 }, 00:15:04.416 { 00:15:04.416 "name": "BaseBdev3", 00:15:04.416 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:04.416 "is_configured": true, 00:15:04.416 "data_offset": 2048, 00:15:04.416 "data_size": 63488 00:15:04.416 }, 00:15:04.416 { 00:15:04.416 "name": "BaseBdev4", 00:15:04.416 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:04.416 "is_configured": true, 00:15:04.416 "data_offset": 2048, 00:15:04.416 
"data_size": 63488 00:15:04.416 } 00:15:04.416 ] 00:15:04.416 }' 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.416 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.985 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.985 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.985 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.985 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.985 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.985 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.985 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.986 "name": "raid_bdev1", 00:15:04.986 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:04.986 "strip_size_kb": 0, 00:15:04.986 "state": "online", 00:15:04.986 "raid_level": "raid1", 00:15:04.986 "superblock": true, 00:15:04.986 "num_base_bdevs": 4, 00:15:04.986 "num_base_bdevs_discovered": 2, 00:15:04.986 "num_base_bdevs_operational": 2, 00:15:04.986 "base_bdevs_list": [ 00:15:04.986 { 00:15:04.986 "name": null, 00:15:04.986 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:04.986 "is_configured": false, 00:15:04.986 "data_offset": 0, 00:15:04.986 "data_size": 63488 00:15:04.986 }, 00:15:04.986 { 00:15:04.986 "name": null, 00:15:04.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.986 "is_configured": false, 00:15:04.986 "data_offset": 2048, 00:15:04.986 "data_size": 63488 00:15:04.986 }, 00:15:04.986 { 00:15:04.986 "name": "BaseBdev3", 00:15:04.986 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:04.986 "is_configured": true, 00:15:04.986 "data_offset": 2048, 00:15:04.986 "data_size": 63488 00:15:04.986 }, 00:15:04.986 { 00:15:04.986 "name": "BaseBdev4", 00:15:04.986 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:04.986 "is_configured": true, 00:15:04.986 "data_offset": 2048, 00:15:04.986 "data_size": 63488 00:15:04.986 } 00:15:04.986 ] 00:15:04.986 }' 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.986 13:31:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.986 [2024-11-18 13:31:34.910541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.986 [2024-11-18 13:31:34.910641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.986 [2024-11-18 13:31:34.910717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:04.986 [2024-11-18 13:31:34.910758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.986 [2024-11-18 13:31:34.911224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.986 [2024-11-18 13:31:34.911283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.986 [2024-11-18 13:31:34.911383] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:04.986 [2024-11-18 13:31:34.911426] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:04.986 [2024-11-18 13:31:34.911467] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:04.986 [2024-11-18 13:31:34.911500] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:04.986 BaseBdev1 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.986 13:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.924 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.924 "name": "raid_bdev1", 00:15:05.924 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:05.924 "strip_size_kb": 0, 00:15:05.924 "state": "online", 00:15:05.924 "raid_level": "raid1", 00:15:05.924 "superblock": true, 00:15:05.924 "num_base_bdevs": 4, 00:15:05.924 "num_base_bdevs_discovered": 2, 00:15:05.924 "num_base_bdevs_operational": 2, 00:15:05.924 "base_bdevs_list": [ 00:15:05.924 { 00:15:05.924 "name": null, 00:15:05.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.924 "is_configured": false, 00:15:05.924 
"data_offset": 0, 00:15:05.924 "data_size": 63488 00:15:05.924 }, 00:15:05.924 { 00:15:05.924 "name": null, 00:15:05.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.924 "is_configured": false, 00:15:05.924 "data_offset": 2048, 00:15:05.924 "data_size": 63488 00:15:05.924 }, 00:15:05.924 { 00:15:05.924 "name": "BaseBdev3", 00:15:05.924 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:05.924 "is_configured": true, 00:15:05.924 "data_offset": 2048, 00:15:05.924 "data_size": 63488 00:15:05.924 }, 00:15:05.924 { 00:15:05.924 "name": "BaseBdev4", 00:15:05.924 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:05.925 "is_configured": true, 00:15:05.925 "data_offset": 2048, 00:15:05.925 "data_size": 63488 00:15:05.925 } 00:15:05.925 ] 00:15:05.925 }' 00:15:05.925 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.925 13:31:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.494 "name": "raid_bdev1", 00:15:06.494 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:06.494 "strip_size_kb": 0, 00:15:06.494 "state": "online", 00:15:06.494 "raid_level": "raid1", 00:15:06.494 "superblock": true, 00:15:06.494 "num_base_bdevs": 4, 00:15:06.494 "num_base_bdevs_discovered": 2, 00:15:06.494 "num_base_bdevs_operational": 2, 00:15:06.494 "base_bdevs_list": [ 00:15:06.494 { 00:15:06.494 "name": null, 00:15:06.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.494 "is_configured": false, 00:15:06.494 "data_offset": 0, 00:15:06.494 "data_size": 63488 00:15:06.494 }, 00:15:06.494 { 00:15:06.494 "name": null, 00:15:06.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.494 "is_configured": false, 00:15:06.494 "data_offset": 2048, 00:15:06.494 "data_size": 63488 00:15:06.494 }, 00:15:06.494 { 00:15:06.494 "name": "BaseBdev3", 00:15:06.494 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:06.494 "is_configured": true, 00:15:06.494 "data_offset": 2048, 00:15:06.494 "data_size": 63488 00:15:06.494 }, 00:15:06.494 { 00:15:06.494 "name": "BaseBdev4", 00:15:06.494 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:06.494 "is_configured": true, 00:15:06.494 "data_offset": 2048, 00:15:06.494 "data_size": 63488 00:15:06.494 } 00:15:06.494 ] 00:15:06.494 }' 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.494 
13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.494 [2024-11-18 13:31:36.499984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.494 [2024-11-18 13:31:36.500172] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:06.494 [2024-11-18 13:31:36.500223] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:06.494 request: 00:15:06.494 { 00:15:06.494 "base_bdev": "BaseBdev1", 00:15:06.494 "raid_bdev": "raid_bdev1", 00:15:06.494 "method": "bdev_raid_add_base_bdev", 00:15:06.494 "req_id": 1 00:15:06.494 } 00:15:06.494 Got JSON-RPC error response 00:15:06.494 response: 00:15:06.494 { 00:15:06.494 "code": -22, 00:15:06.494 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:06.494 } 00:15:06.494 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:06.495 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:06.495 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:06.495 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:06.495 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:06.495 13:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.911 13:31:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.911 "name": "raid_bdev1", 00:15:07.911 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:07.911 "strip_size_kb": 0, 00:15:07.911 "state": "online", 00:15:07.911 "raid_level": "raid1", 00:15:07.911 "superblock": true, 00:15:07.911 "num_base_bdevs": 4, 00:15:07.911 "num_base_bdevs_discovered": 2, 00:15:07.911 "num_base_bdevs_operational": 2, 00:15:07.911 "base_bdevs_list": [ 00:15:07.911 { 00:15:07.911 "name": null, 00:15:07.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.911 "is_configured": false, 00:15:07.911 "data_offset": 0, 00:15:07.911 "data_size": 63488 00:15:07.911 }, 00:15:07.911 { 00:15:07.911 "name": null, 00:15:07.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.911 "is_configured": false, 00:15:07.911 "data_offset": 2048, 00:15:07.911 "data_size": 63488 00:15:07.911 }, 00:15:07.911 { 00:15:07.911 "name": "BaseBdev3", 00:15:07.911 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:07.911 "is_configured": true, 00:15:07.911 "data_offset": 2048, 00:15:07.911 "data_size": 63488 00:15:07.911 }, 00:15:07.911 { 00:15:07.911 "name": "BaseBdev4", 00:15:07.911 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:07.911 "is_configured": true, 00:15:07.911 "data_offset": 2048, 00:15:07.911 "data_size": 63488 00:15:07.911 } 00:15:07.911 ] 00:15:07.911 }' 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.911 13:31:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.911 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.170 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.170 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.170 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.170 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.170 13:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.171 "name": "raid_bdev1", 00:15:08.171 "uuid": "fcfa4d2a-2661-45b5-8888-a54efa2a27db", 00:15:08.171 "strip_size_kb": 0, 00:15:08.171 "state": "online", 00:15:08.171 "raid_level": "raid1", 00:15:08.171 "superblock": true, 00:15:08.171 "num_base_bdevs": 4, 00:15:08.171 "num_base_bdevs_discovered": 2, 00:15:08.171 "num_base_bdevs_operational": 2, 00:15:08.171 "base_bdevs_list": [ 00:15:08.171 { 00:15:08.171 "name": null, 00:15:08.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.171 "is_configured": false, 00:15:08.171 "data_offset": 0, 00:15:08.171 "data_size": 63488 00:15:08.171 }, 00:15:08.171 { 00:15:08.171 "name": null, 00:15:08.171 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:08.171 "is_configured": false, 00:15:08.171 "data_offset": 2048, 00:15:08.171 "data_size": 63488 00:15:08.171 }, 00:15:08.171 { 00:15:08.171 "name": "BaseBdev3", 00:15:08.171 "uuid": "455f58c7-daa3-57a3-95e8-b41cbfa1c232", 00:15:08.171 "is_configured": true, 00:15:08.171 "data_offset": 2048, 00:15:08.171 "data_size": 63488 00:15:08.171 }, 00:15:08.171 { 00:15:08.171 "name": "BaseBdev4", 00:15:08.171 "uuid": "cb80fc1b-bc3c-5890-a741-2f6b5ef626f0", 00:15:08.171 "is_configured": true, 00:15:08.171 "data_offset": 2048, 00:15:08.171 "data_size": 63488 00:15:08.171 } 00:15:08.171 ] 00:15:08.171 }' 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79114 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79114 ']' 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79114 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79114 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79114' 00:15:08.171 killing process with pid 79114 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79114 00:15:08.171 Received shutdown signal, test time was about 17.921317 seconds 00:15:08.171 00:15:08.171 Latency(us) 00:15:08.171 [2024-11-18T13:31:38.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.171 [2024-11-18T13:31:38.225Z] =================================================================================================================== 00:15:08.171 [2024-11-18T13:31:38.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:08.171 [2024-11-18 13:31:38.122195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.171 [2024-11-18 13:31:38.122301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.171 [2024-11-18 13:31:38.122365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.171 [2024-11-18 13:31:38.122375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:08.171 13:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79114 00:15:08.739 [2024-11-18 13:31:38.518925] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.676 13:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.676 00:15:09.676 real 0m21.273s 00:15:09.676 user 0m27.760s 00:15:09.676 sys 0m2.562s 00:15:09.676 ************************************ 00:15:09.676 END TEST raid_rebuild_test_sb_io 00:15:09.676 ************************************ 00:15:09.676 13:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.676 13:31:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.676 13:31:39 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:09.676 13:31:39 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:09.677 13:31:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:09.677 13:31:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.677 13:31:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.677 ************************************ 00:15:09.677 START TEST raid5f_state_function_test 00:15:09.677 ************************************ 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.677 13:31:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:09.677 Process raid pid: 79837 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79837 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:09.677 
13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79837' 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79837 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79837 ']' 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.677 13:31:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.936 [2024-11-18 13:31:39.783183] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:15:09.936 [2024-11-18 13:31:39.783373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.936 [2024-11-18 13:31:39.958977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.196 [2024-11-18 13:31:40.069818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.455 [2024-11-18 13:31:40.268362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.455 [2024-11-18 13:31:40.268473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.724 [2024-11-18 13:31:40.615095] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.724 [2024-11-18 13:31:40.615232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.724 [2024-11-18 13:31:40.615261] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.724 [2024-11-18 13:31:40.615284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.724 [2024-11-18 13:31:40.615302] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:10.724 [2024-11-18 13:31:40.615322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:10.724 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.724 "name": "Existed_Raid", 00:15:10.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.724 "strip_size_kb": 64, 00:15:10.724 "state": "configuring", 00:15:10.724 "raid_level": "raid5f", 00:15:10.724 "superblock": false, 00:15:10.724 "num_base_bdevs": 3, 00:15:10.724 "num_base_bdevs_discovered": 0, 00:15:10.724 "num_base_bdevs_operational": 3, 00:15:10.724 "base_bdevs_list": [ 00:15:10.724 { 00:15:10.724 "name": "BaseBdev1", 00:15:10.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.724 "is_configured": false, 00:15:10.724 "data_offset": 0, 00:15:10.724 "data_size": 0 00:15:10.724 }, 00:15:10.724 { 00:15:10.725 "name": "BaseBdev2", 00:15:10.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.725 "is_configured": false, 00:15:10.725 "data_offset": 0, 00:15:10.725 "data_size": 0 00:15:10.725 }, 00:15:10.725 { 00:15:10.725 "name": "BaseBdev3", 00:15:10.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.725 "is_configured": false, 00:15:10.725 "data_offset": 0, 00:15:10.725 "data_size": 0 00:15:10.725 } 00:15:10.725 ] 00:15:10.725 }' 00:15:10.725 13:31:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.725 13:31:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.303 [2024-11-18 13:31:41.066310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.303 [2024-11-18 13:31:41.066395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.303 [2024-11-18 13:31:41.078297] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.303 [2024-11-18 13:31:41.078384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.303 [2024-11-18 13:31:41.078411] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.303 [2024-11-18 13:31:41.078433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.303 [2024-11-18 13:31:41.078451] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.303 [2024-11-18 13:31:41.078471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.303 [2024-11-18 13:31:41.126854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.303 BaseBdev1 00:15:11.303 13:31:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.303 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.304 [ 00:15:11.304 { 00:15:11.304 "name": "BaseBdev1", 00:15:11.304 "aliases": [ 00:15:11.304 "d0b2b2fc-19f5-48c7-a0f6-4599d962603f" 00:15:11.304 ], 00:15:11.304 "product_name": "Malloc disk", 00:15:11.304 "block_size": 512, 00:15:11.304 "num_blocks": 65536, 00:15:11.304 "uuid": "d0b2b2fc-19f5-48c7-a0f6-4599d962603f", 00:15:11.304 "assigned_rate_limits": { 00:15:11.304 "rw_ios_per_sec": 0, 00:15:11.304 
"rw_mbytes_per_sec": 0, 00:15:11.304 "r_mbytes_per_sec": 0, 00:15:11.304 "w_mbytes_per_sec": 0 00:15:11.304 }, 00:15:11.304 "claimed": true, 00:15:11.304 "claim_type": "exclusive_write", 00:15:11.304 "zoned": false, 00:15:11.304 "supported_io_types": { 00:15:11.304 "read": true, 00:15:11.304 "write": true, 00:15:11.304 "unmap": true, 00:15:11.304 "flush": true, 00:15:11.304 "reset": true, 00:15:11.304 "nvme_admin": false, 00:15:11.304 "nvme_io": false, 00:15:11.304 "nvme_io_md": false, 00:15:11.304 "write_zeroes": true, 00:15:11.304 "zcopy": true, 00:15:11.304 "get_zone_info": false, 00:15:11.304 "zone_management": false, 00:15:11.304 "zone_append": false, 00:15:11.304 "compare": false, 00:15:11.304 "compare_and_write": false, 00:15:11.304 "abort": true, 00:15:11.304 "seek_hole": false, 00:15:11.304 "seek_data": false, 00:15:11.304 "copy": true, 00:15:11.304 "nvme_iov_md": false 00:15:11.304 }, 00:15:11.304 "memory_domains": [ 00:15:11.304 { 00:15:11.304 "dma_device_id": "system", 00:15:11.304 "dma_device_type": 1 00:15:11.304 }, 00:15:11.304 { 00:15:11.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.304 "dma_device_type": 2 00:15:11.304 } 00:15:11.304 ], 00:15:11.304 "driver_specific": {} 00:15:11.304 } 00:15:11.304 ] 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.304 13:31:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.304 "name": "Existed_Raid", 00:15:11.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.304 "strip_size_kb": 64, 00:15:11.304 "state": "configuring", 00:15:11.304 "raid_level": "raid5f", 00:15:11.304 "superblock": false, 00:15:11.304 "num_base_bdevs": 3, 00:15:11.304 "num_base_bdevs_discovered": 1, 00:15:11.304 "num_base_bdevs_operational": 3, 00:15:11.304 "base_bdevs_list": [ 00:15:11.304 { 00:15:11.304 "name": "BaseBdev1", 00:15:11.304 "uuid": "d0b2b2fc-19f5-48c7-a0f6-4599d962603f", 00:15:11.304 "is_configured": true, 00:15:11.304 "data_offset": 0, 00:15:11.304 "data_size": 65536 00:15:11.304 }, 00:15:11.304 { 00:15:11.304 "name": 
"BaseBdev2", 00:15:11.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.304 "is_configured": false, 00:15:11.304 "data_offset": 0, 00:15:11.304 "data_size": 0 00:15:11.304 }, 00:15:11.304 { 00:15:11.304 "name": "BaseBdev3", 00:15:11.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.304 "is_configured": false, 00:15:11.304 "data_offset": 0, 00:15:11.304 "data_size": 0 00:15:11.304 } 00:15:11.304 ] 00:15:11.304 }' 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.304 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.872 [2024-11-18 13:31:41.622295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.872 [2024-11-18 13:31:41.622379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.872 [2024-11-18 13:31:41.634325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.872 [2024-11-18 13:31:41.636036] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:11.872 [2024-11-18 13:31:41.636109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.872 [2024-11-18 13:31:41.636149] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.872 [2024-11-18 13:31:41.636171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.872 "name": "Existed_Raid", 00:15:11.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.872 "strip_size_kb": 64, 00:15:11.872 "state": "configuring", 00:15:11.872 "raid_level": "raid5f", 00:15:11.872 "superblock": false, 00:15:11.872 "num_base_bdevs": 3, 00:15:11.872 "num_base_bdevs_discovered": 1, 00:15:11.872 "num_base_bdevs_operational": 3, 00:15:11.872 "base_bdevs_list": [ 00:15:11.872 { 00:15:11.872 "name": "BaseBdev1", 00:15:11.872 "uuid": "d0b2b2fc-19f5-48c7-a0f6-4599d962603f", 00:15:11.872 "is_configured": true, 00:15:11.872 "data_offset": 0, 00:15:11.872 "data_size": 65536 00:15:11.872 }, 00:15:11.872 { 00:15:11.872 "name": "BaseBdev2", 00:15:11.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.872 "is_configured": false, 00:15:11.872 "data_offset": 0, 00:15:11.872 "data_size": 0 00:15:11.872 }, 00:15:11.872 { 00:15:11.872 "name": "BaseBdev3", 00:15:11.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.872 "is_configured": false, 00:15:11.872 "data_offset": 0, 00:15:11.872 "data_size": 0 00:15:11.872 } 00:15:11.872 ] 00:15:11.872 }' 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.872 13:31:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.132 [2024-11-18 13:31:42.131026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.132 BaseBdev2 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.132 13:31:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:12.132 [ 00:15:12.132 { 00:15:12.132 "name": "BaseBdev2", 00:15:12.132 "aliases": [ 00:15:12.132 "2bdd385d-016d-443c-a357-f16a674b1324" 00:15:12.132 ], 00:15:12.132 "product_name": "Malloc disk", 00:15:12.132 "block_size": 512, 00:15:12.132 "num_blocks": 65536, 00:15:12.132 "uuid": "2bdd385d-016d-443c-a357-f16a674b1324", 00:15:12.132 "assigned_rate_limits": { 00:15:12.132 "rw_ios_per_sec": 0, 00:15:12.132 "rw_mbytes_per_sec": 0, 00:15:12.132 "r_mbytes_per_sec": 0, 00:15:12.132 "w_mbytes_per_sec": 0 00:15:12.132 }, 00:15:12.132 "claimed": true, 00:15:12.132 "claim_type": "exclusive_write", 00:15:12.132 "zoned": false, 00:15:12.132 "supported_io_types": { 00:15:12.132 "read": true, 00:15:12.132 "write": true, 00:15:12.132 "unmap": true, 00:15:12.132 "flush": true, 00:15:12.132 "reset": true, 00:15:12.132 "nvme_admin": false, 00:15:12.132 "nvme_io": false, 00:15:12.132 "nvme_io_md": false, 00:15:12.132 "write_zeroes": true, 00:15:12.132 "zcopy": true, 00:15:12.132 "get_zone_info": false, 00:15:12.132 "zone_management": false, 00:15:12.132 "zone_append": false, 00:15:12.132 "compare": false, 00:15:12.132 "compare_and_write": false, 00:15:12.132 "abort": true, 00:15:12.132 "seek_hole": false, 00:15:12.132 "seek_data": false, 00:15:12.132 "copy": true, 00:15:12.132 "nvme_iov_md": false 00:15:12.132 }, 00:15:12.132 "memory_domains": [ 00:15:12.132 { 00:15:12.132 "dma_device_id": "system", 00:15:12.133 "dma_device_type": 1 00:15:12.133 }, 00:15:12.133 { 00:15:12.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.133 "dma_device_type": 2 00:15:12.133 } 00:15:12.133 ], 00:15:12.133 "driver_specific": {} 00:15:12.133 } 00:15:12.133 ] 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.133 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.393 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.393 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:12.393 "name": "Existed_Raid", 00:15:12.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.393 "strip_size_kb": 64, 00:15:12.393 "state": "configuring", 00:15:12.393 "raid_level": "raid5f", 00:15:12.393 "superblock": false, 00:15:12.393 "num_base_bdevs": 3, 00:15:12.393 "num_base_bdevs_discovered": 2, 00:15:12.393 "num_base_bdevs_operational": 3, 00:15:12.393 "base_bdevs_list": [ 00:15:12.393 { 00:15:12.393 "name": "BaseBdev1", 00:15:12.393 "uuid": "d0b2b2fc-19f5-48c7-a0f6-4599d962603f", 00:15:12.393 "is_configured": true, 00:15:12.393 "data_offset": 0, 00:15:12.393 "data_size": 65536 00:15:12.393 }, 00:15:12.393 { 00:15:12.393 "name": "BaseBdev2", 00:15:12.393 "uuid": "2bdd385d-016d-443c-a357-f16a674b1324", 00:15:12.393 "is_configured": true, 00:15:12.393 "data_offset": 0, 00:15:12.393 "data_size": 65536 00:15:12.393 }, 00:15:12.393 { 00:15:12.393 "name": "BaseBdev3", 00:15:12.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.393 "is_configured": false, 00:15:12.393 "data_offset": 0, 00:15:12.393 "data_size": 0 00:15:12.393 } 00:15:12.393 ] 00:15:12.393 }' 00:15:12.393 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.393 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.653 [2024-11-18 13:31:42.638967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.653 [2024-11-18 13:31:42.639025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:12.653 [2024-11-18 13:31:42.639036] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:12.653 [2024-11-18 13:31:42.639326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:12.653 [2024-11-18 13:31:42.644868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:12.653 [2024-11-18 13:31:42.644890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:12.653 [2024-11-18 13:31:42.645149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.653 BaseBdev3 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.653 [ 00:15:12.653 { 00:15:12.653 "name": "BaseBdev3", 00:15:12.653 "aliases": [ 00:15:12.653 "bba1c6c7-d2e5-4c67-a5d2-7271b8ff4c14" 00:15:12.653 ], 00:15:12.653 "product_name": "Malloc disk", 00:15:12.653 "block_size": 512, 00:15:12.653 "num_blocks": 65536, 00:15:12.653 "uuid": "bba1c6c7-d2e5-4c67-a5d2-7271b8ff4c14", 00:15:12.653 "assigned_rate_limits": { 00:15:12.653 "rw_ios_per_sec": 0, 00:15:12.653 "rw_mbytes_per_sec": 0, 00:15:12.653 "r_mbytes_per_sec": 0, 00:15:12.653 "w_mbytes_per_sec": 0 00:15:12.653 }, 00:15:12.653 "claimed": true, 00:15:12.653 "claim_type": "exclusive_write", 00:15:12.653 "zoned": false, 00:15:12.653 "supported_io_types": { 00:15:12.653 "read": true, 00:15:12.653 "write": true, 00:15:12.653 "unmap": true, 00:15:12.653 "flush": true, 00:15:12.653 "reset": true, 00:15:12.653 "nvme_admin": false, 00:15:12.653 "nvme_io": false, 00:15:12.653 "nvme_io_md": false, 00:15:12.653 "write_zeroes": true, 00:15:12.653 "zcopy": true, 00:15:12.653 "get_zone_info": false, 00:15:12.653 "zone_management": false, 00:15:12.653 "zone_append": false, 00:15:12.653 "compare": false, 00:15:12.653 "compare_and_write": false, 00:15:12.653 "abort": true, 00:15:12.653 "seek_hole": false, 00:15:12.653 "seek_data": false, 00:15:12.653 "copy": true, 00:15:12.653 "nvme_iov_md": false 00:15:12.653 }, 00:15:12.653 "memory_domains": [ 00:15:12.653 { 00:15:12.653 "dma_device_id": "system", 00:15:12.653 "dma_device_type": 1 00:15:12.653 }, 00:15:12.653 { 00:15:12.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.653 "dma_device_type": 2 00:15:12.653 } 00:15:12.653 ], 00:15:12.653 "driver_specific": {} 00:15:12.653 } 00:15:12.653 ] 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.653 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.913 13:31:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.913 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.913 "name": "Existed_Raid", 00:15:12.913 "uuid": "bfa3c42f-6364-4b9d-ade1-2ff90a36241d", 00:15:12.913 "strip_size_kb": 64, 00:15:12.913 "state": "online", 00:15:12.913 "raid_level": "raid5f", 00:15:12.913 "superblock": false, 00:15:12.913 "num_base_bdevs": 3, 00:15:12.913 "num_base_bdevs_discovered": 3, 00:15:12.913 "num_base_bdevs_operational": 3, 00:15:12.913 "base_bdevs_list": [ 00:15:12.913 { 00:15:12.913 "name": "BaseBdev1", 00:15:12.913 "uuid": "d0b2b2fc-19f5-48c7-a0f6-4599d962603f", 00:15:12.913 "is_configured": true, 00:15:12.913 "data_offset": 0, 00:15:12.913 "data_size": 65536 00:15:12.913 }, 00:15:12.913 { 00:15:12.913 "name": "BaseBdev2", 00:15:12.913 "uuid": "2bdd385d-016d-443c-a357-f16a674b1324", 00:15:12.913 "is_configured": true, 00:15:12.913 "data_offset": 0, 00:15:12.913 "data_size": 65536 00:15:12.913 }, 00:15:12.913 { 00:15:12.913 "name": "BaseBdev3", 00:15:12.913 "uuid": "bba1c6c7-d2e5-4c67-a5d2-7271b8ff4c14", 00:15:12.913 "is_configured": true, 00:15:12.913 "data_offset": 0, 00:15:12.913 "data_size": 65536 00:15:12.913 } 00:15:12.913 ] 00:15:12.913 }' 00:15:12.913 13:31:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.913 13:31:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:13.176 13:31:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.176 [2024-11-18 13:31:43.158451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.176 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:13.176 "name": "Existed_Raid", 00:15:13.176 "aliases": [ 00:15:13.176 "bfa3c42f-6364-4b9d-ade1-2ff90a36241d" 00:15:13.176 ], 00:15:13.176 "product_name": "Raid Volume", 00:15:13.176 "block_size": 512, 00:15:13.176 "num_blocks": 131072, 00:15:13.176 "uuid": "bfa3c42f-6364-4b9d-ade1-2ff90a36241d", 00:15:13.176 "assigned_rate_limits": { 00:15:13.176 "rw_ios_per_sec": 0, 00:15:13.176 "rw_mbytes_per_sec": 0, 00:15:13.176 "r_mbytes_per_sec": 0, 00:15:13.176 "w_mbytes_per_sec": 0 00:15:13.176 }, 00:15:13.176 "claimed": false, 00:15:13.176 "zoned": false, 00:15:13.176 "supported_io_types": { 00:15:13.176 "read": true, 00:15:13.176 "write": true, 00:15:13.176 "unmap": false, 00:15:13.176 "flush": false, 00:15:13.176 "reset": true, 00:15:13.176 "nvme_admin": false, 00:15:13.176 "nvme_io": false, 00:15:13.176 "nvme_io_md": false, 00:15:13.176 "write_zeroes": true, 00:15:13.176 "zcopy": false, 00:15:13.176 "get_zone_info": false, 00:15:13.176 "zone_management": false, 00:15:13.176 "zone_append": false, 
00:15:13.176 "compare": false, 00:15:13.176 "compare_and_write": false, 00:15:13.176 "abort": false, 00:15:13.176 "seek_hole": false, 00:15:13.176 "seek_data": false, 00:15:13.176 "copy": false, 00:15:13.176 "nvme_iov_md": false 00:15:13.176 }, 00:15:13.176 "driver_specific": { 00:15:13.176 "raid": { 00:15:13.176 "uuid": "bfa3c42f-6364-4b9d-ade1-2ff90a36241d", 00:15:13.176 "strip_size_kb": 64, 00:15:13.176 "state": "online", 00:15:13.176 "raid_level": "raid5f", 00:15:13.177 "superblock": false, 00:15:13.177 "num_base_bdevs": 3, 00:15:13.177 "num_base_bdevs_discovered": 3, 00:15:13.177 "num_base_bdevs_operational": 3, 00:15:13.177 "base_bdevs_list": [ 00:15:13.177 { 00:15:13.177 "name": "BaseBdev1", 00:15:13.177 "uuid": "d0b2b2fc-19f5-48c7-a0f6-4599d962603f", 00:15:13.177 "is_configured": true, 00:15:13.177 "data_offset": 0, 00:15:13.177 "data_size": 65536 00:15:13.177 }, 00:15:13.177 { 00:15:13.177 "name": "BaseBdev2", 00:15:13.177 "uuid": "2bdd385d-016d-443c-a357-f16a674b1324", 00:15:13.177 "is_configured": true, 00:15:13.177 "data_offset": 0, 00:15:13.177 "data_size": 65536 00:15:13.177 }, 00:15:13.177 { 00:15:13.177 "name": "BaseBdev3", 00:15:13.177 "uuid": "bba1c6c7-d2e5-4c67-a5d2-7271b8ff4c14", 00:15:13.177 "is_configured": true, 00:15:13.177 "data_offset": 0, 00:15:13.177 "data_size": 65536 00:15:13.177 } 00:15:13.177 ] 00:15:13.177 } 00:15:13.177 } 00:15:13.177 }' 00:15:13.177 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.436 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:13.436 BaseBdev2 00:15:13.436 BaseBdev3' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.437 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.437 [2024-11-18 13:31:43.421859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:13.695 
13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.695 "name": "Existed_Raid", 00:15:13.695 "uuid": "bfa3c42f-6364-4b9d-ade1-2ff90a36241d", 00:15:13.695 "strip_size_kb": 64, 00:15:13.695 "state": 
"online", 00:15:13.695 "raid_level": "raid5f", 00:15:13.695 "superblock": false, 00:15:13.695 "num_base_bdevs": 3, 00:15:13.695 "num_base_bdevs_discovered": 2, 00:15:13.695 "num_base_bdevs_operational": 2, 00:15:13.695 "base_bdevs_list": [ 00:15:13.695 { 00:15:13.695 "name": null, 00:15:13.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.695 "is_configured": false, 00:15:13.695 "data_offset": 0, 00:15:13.695 "data_size": 65536 00:15:13.695 }, 00:15:13.695 { 00:15:13.695 "name": "BaseBdev2", 00:15:13.695 "uuid": "2bdd385d-016d-443c-a357-f16a674b1324", 00:15:13.695 "is_configured": true, 00:15:13.695 "data_offset": 0, 00:15:13.695 "data_size": 65536 00:15:13.695 }, 00:15:13.695 { 00:15:13.695 "name": "BaseBdev3", 00:15:13.695 "uuid": "bba1c6c7-d2e5-4c67-a5d2-7271b8ff4c14", 00:15:13.695 "is_configured": true, 00:15:13.695 "data_offset": 0, 00:15:13.695 "data_size": 65536 00:15:13.695 } 00:15:13.695 ] 00:15:13.695 }' 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.695 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.954 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:13.954 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.954 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.954 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.954 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.955 [2024-11-18 13:31:43.883135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.955 [2024-11-18 13:31:43.883245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.955 [2024-11-18 13:31:43.970206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.955 13:31:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.214 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:14.214 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:14.214 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:14.214 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.215 [2024-11-18 13:31:44.030116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:14.215 [2024-11-18 13:31:44.030185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.215 BaseBdev2 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:14.215 [ 00:15:14.215 { 00:15:14.215 "name": "BaseBdev2", 00:15:14.215 "aliases": [ 00:15:14.215 "baef0399-1fa5-4278-95cd-36c728f80de8" 00:15:14.215 ], 00:15:14.215 "product_name": "Malloc disk", 00:15:14.215 "block_size": 512, 00:15:14.215 "num_blocks": 65536, 00:15:14.215 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:14.215 "assigned_rate_limits": { 00:15:14.215 "rw_ios_per_sec": 0, 00:15:14.215 "rw_mbytes_per_sec": 0, 00:15:14.215 "r_mbytes_per_sec": 0, 00:15:14.215 "w_mbytes_per_sec": 0 00:15:14.215 }, 00:15:14.215 "claimed": false, 00:15:14.215 "zoned": false, 00:15:14.215 "supported_io_types": { 00:15:14.215 "read": true, 00:15:14.215 "write": true, 00:15:14.215 "unmap": true, 00:15:14.215 "flush": true, 00:15:14.215 "reset": true, 00:15:14.215 "nvme_admin": false, 00:15:14.215 "nvme_io": false, 00:15:14.215 "nvme_io_md": false, 00:15:14.215 "write_zeroes": true, 00:15:14.215 "zcopy": true, 00:15:14.215 "get_zone_info": false, 00:15:14.215 "zone_management": false, 00:15:14.215 "zone_append": false, 00:15:14.215 "compare": false, 00:15:14.215 "compare_and_write": false, 00:15:14.215 "abort": true, 00:15:14.215 "seek_hole": false, 00:15:14.215 "seek_data": false, 00:15:14.215 "copy": true, 00:15:14.215 "nvme_iov_md": false 00:15:14.215 }, 00:15:14.215 "memory_domains": [ 00:15:14.215 { 00:15:14.215 "dma_device_id": "system", 00:15:14.215 "dma_device_type": 1 00:15:14.215 }, 00:15:14.215 { 00:15:14.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.215 "dma_device_type": 2 00:15:14.215 } 00:15:14.215 ], 00:15:14.215 "driver_specific": {} 00:15:14.215 } 00:15:14.215 ] 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.215 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.475 BaseBdev3 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.475 [ 00:15:14.475 { 00:15:14.475 "name": "BaseBdev3", 00:15:14.475 "aliases": [ 00:15:14.475 "485bf842-5c1b-4652-88fb-f17715ce0f9d" 00:15:14.475 ], 00:15:14.475 "product_name": "Malloc disk", 00:15:14.475 "block_size": 512, 00:15:14.475 "num_blocks": 65536, 00:15:14.475 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:14.475 "assigned_rate_limits": { 00:15:14.475 "rw_ios_per_sec": 0, 00:15:14.475 "rw_mbytes_per_sec": 0, 00:15:14.475 "r_mbytes_per_sec": 0, 00:15:14.475 "w_mbytes_per_sec": 0 00:15:14.475 }, 00:15:14.475 "claimed": false, 00:15:14.475 "zoned": false, 00:15:14.475 "supported_io_types": { 00:15:14.475 "read": true, 00:15:14.475 "write": true, 00:15:14.475 "unmap": true, 00:15:14.475 "flush": true, 00:15:14.475 "reset": true, 00:15:14.475 "nvme_admin": false, 00:15:14.475 "nvme_io": false, 00:15:14.475 "nvme_io_md": false, 00:15:14.475 "write_zeroes": true, 00:15:14.475 "zcopy": true, 00:15:14.475 "get_zone_info": false, 00:15:14.475 "zone_management": false, 00:15:14.475 "zone_append": false, 00:15:14.475 "compare": false, 00:15:14.475 "compare_and_write": false, 00:15:14.475 "abort": true, 00:15:14.475 "seek_hole": false, 00:15:14.475 "seek_data": false, 00:15:14.475 "copy": true, 00:15:14.475 "nvme_iov_md": false 00:15:14.475 }, 00:15:14.475 "memory_domains": [ 00:15:14.475 { 00:15:14.475 "dma_device_id": "system", 00:15:14.475 "dma_device_type": 1 00:15:14.475 }, 00:15:14.475 { 00:15:14.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.475 "dma_device_type": 2 00:15:14.475 } 00:15:14.475 ], 00:15:14.475 "driver_specific": {} 00:15:14.475 } 00:15:14.475 ] 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:14.475 13:31:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.475 [2024-11-18 13:31:44.331216] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.475 [2024-11-18 13:31:44.331267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.475 [2024-11-18 13:31:44.331288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.475 [2024-11-18 13:31:44.332945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.475 13:31:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.475 "name": "Existed_Raid", 00:15:14.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.475 "strip_size_kb": 64, 00:15:14.475 "state": "configuring", 00:15:14.475 "raid_level": "raid5f", 00:15:14.475 "superblock": false, 00:15:14.475 "num_base_bdevs": 3, 00:15:14.475 "num_base_bdevs_discovered": 2, 00:15:14.475 "num_base_bdevs_operational": 3, 00:15:14.475 "base_bdevs_list": [ 00:15:14.475 { 00:15:14.475 "name": "BaseBdev1", 00:15:14.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.475 "is_configured": false, 00:15:14.475 "data_offset": 0, 00:15:14.475 "data_size": 0 00:15:14.475 }, 00:15:14.475 { 00:15:14.475 "name": "BaseBdev2", 00:15:14.475 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:14.475 "is_configured": true, 00:15:14.475 "data_offset": 0, 00:15:14.475 "data_size": 65536 00:15:14.475 }, 00:15:14.475 { 00:15:14.475 "name": "BaseBdev3", 00:15:14.475 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:14.475 "is_configured": true, 
00:15:14.475 "data_offset": 0, 00:15:14.475 "data_size": 65536 00:15:14.475 } 00:15:14.475 ] 00:15:14.475 }' 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.475 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.734 [2024-11-18 13:31:44.778485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.734 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.734 13:31:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.992 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.992 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.992 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.992 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.992 13:31:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.992 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.992 "name": "Existed_Raid", 00:15:14.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.992 "strip_size_kb": 64, 00:15:14.992 "state": "configuring", 00:15:14.992 "raid_level": "raid5f", 00:15:14.992 "superblock": false, 00:15:14.992 "num_base_bdevs": 3, 00:15:14.992 "num_base_bdevs_discovered": 1, 00:15:14.992 "num_base_bdevs_operational": 3, 00:15:14.992 "base_bdevs_list": [ 00:15:14.992 { 00:15:14.992 "name": "BaseBdev1", 00:15:14.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.992 "is_configured": false, 00:15:14.992 "data_offset": 0, 00:15:14.992 "data_size": 0 00:15:14.992 }, 00:15:14.992 { 00:15:14.992 "name": null, 00:15:14.992 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:14.992 "is_configured": false, 00:15:14.992 "data_offset": 0, 00:15:14.992 "data_size": 65536 00:15:14.992 }, 00:15:14.992 { 00:15:14.992 "name": "BaseBdev3", 00:15:14.992 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:14.992 "is_configured": true, 00:15:14.992 "data_offset": 0, 00:15:14.992 "data_size": 65536 00:15:14.992 } 00:15:14.992 ] 00:15:14.992 }' 00:15:14.992 13:31:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.992 13:31:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.251 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.510 [2024-11-18 13:31:45.321395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.510 BaseBdev1 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.510 13:31:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.510 [ 00:15:15.510 { 00:15:15.510 "name": "BaseBdev1", 00:15:15.510 "aliases": [ 00:15:15.510 "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51" 00:15:15.510 ], 00:15:15.510 "product_name": "Malloc disk", 00:15:15.510 "block_size": 512, 00:15:15.510 "num_blocks": 65536, 00:15:15.510 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:15.510 "assigned_rate_limits": { 00:15:15.510 "rw_ios_per_sec": 0, 00:15:15.510 "rw_mbytes_per_sec": 0, 00:15:15.510 "r_mbytes_per_sec": 0, 00:15:15.510 "w_mbytes_per_sec": 0 00:15:15.510 }, 00:15:15.510 "claimed": true, 00:15:15.510 "claim_type": "exclusive_write", 00:15:15.510 "zoned": false, 00:15:15.510 "supported_io_types": { 00:15:15.510 "read": true, 00:15:15.510 "write": true, 00:15:15.510 "unmap": true, 00:15:15.510 "flush": true, 00:15:15.510 "reset": true, 00:15:15.510 "nvme_admin": false, 00:15:15.510 "nvme_io": false, 00:15:15.510 "nvme_io_md": false, 00:15:15.510 "write_zeroes": true, 00:15:15.510 "zcopy": true, 00:15:15.510 "get_zone_info": false, 00:15:15.510 "zone_management": false, 00:15:15.510 "zone_append": false, 00:15:15.510 
"compare": false, 00:15:15.510 "compare_and_write": false, 00:15:15.510 "abort": true, 00:15:15.510 "seek_hole": false, 00:15:15.510 "seek_data": false, 00:15:15.510 "copy": true, 00:15:15.510 "nvme_iov_md": false 00:15:15.510 }, 00:15:15.510 "memory_domains": [ 00:15:15.510 { 00:15:15.510 "dma_device_id": "system", 00:15:15.510 "dma_device_type": 1 00:15:15.510 }, 00:15:15.510 { 00:15:15.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.510 "dma_device_type": 2 00:15:15.510 } 00:15:15.510 ], 00:15:15.510 "driver_specific": {} 00:15:15.510 } 00:15:15.510 ] 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.510 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.511 13:31:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.511 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.511 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.511 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.511 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.511 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.511 "name": "Existed_Raid", 00:15:15.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.511 "strip_size_kb": 64, 00:15:15.511 "state": "configuring", 00:15:15.511 "raid_level": "raid5f", 00:15:15.511 "superblock": false, 00:15:15.511 "num_base_bdevs": 3, 00:15:15.511 "num_base_bdevs_discovered": 2, 00:15:15.511 "num_base_bdevs_operational": 3, 00:15:15.511 "base_bdevs_list": [ 00:15:15.511 { 00:15:15.511 "name": "BaseBdev1", 00:15:15.511 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:15.511 "is_configured": true, 00:15:15.511 "data_offset": 0, 00:15:15.511 "data_size": 65536 00:15:15.511 }, 00:15:15.511 { 00:15:15.511 "name": null, 00:15:15.511 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:15.511 "is_configured": false, 00:15:15.511 "data_offset": 0, 00:15:15.511 "data_size": 65536 00:15:15.511 }, 00:15:15.511 { 00:15:15.511 "name": "BaseBdev3", 00:15:15.511 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:15.511 "is_configured": true, 00:15:15.511 "data_offset": 0, 00:15:15.511 "data_size": 65536 00:15:15.511 } 00:15:15.511 ] 00:15:15.511 }' 00:15:15.511 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.511 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.769 13:31:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.769 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:15.769 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.769 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.769 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.028 [2024-11-18 13:31:45.852481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.028 13:31:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.028 "name": "Existed_Raid", 00:15:16.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.028 "strip_size_kb": 64, 00:15:16.028 "state": "configuring", 00:15:16.028 "raid_level": "raid5f", 00:15:16.028 "superblock": false, 00:15:16.028 "num_base_bdevs": 3, 00:15:16.028 "num_base_bdevs_discovered": 1, 00:15:16.028 "num_base_bdevs_operational": 3, 00:15:16.028 "base_bdevs_list": [ 00:15:16.028 { 00:15:16.028 "name": "BaseBdev1", 00:15:16.028 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:16.028 "is_configured": true, 00:15:16.028 "data_offset": 0, 00:15:16.028 "data_size": 65536 00:15:16.028 }, 00:15:16.028 { 00:15:16.028 "name": null, 00:15:16.028 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:16.028 "is_configured": false, 00:15:16.028 "data_offset": 0, 00:15:16.028 "data_size": 65536 00:15:16.028 }, 00:15:16.028 { 00:15:16.028 "name": null, 
00:15:16.028 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:16.028 "is_configured": false, 00:15:16.028 "data_offset": 0, 00:15:16.028 "data_size": 65536 00:15:16.028 } 00:15:16.028 ] 00:15:16.028 }' 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.028 13:31:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.287 [2024-11-18 13:31:46.311741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.287 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.288 13:31:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.288 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.547 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.547 "name": "Existed_Raid", 00:15:16.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.547 "strip_size_kb": 64, 00:15:16.547 "state": "configuring", 00:15:16.547 "raid_level": "raid5f", 00:15:16.547 "superblock": false, 00:15:16.547 "num_base_bdevs": 3, 00:15:16.547 "num_base_bdevs_discovered": 2, 00:15:16.547 "num_base_bdevs_operational": 3, 00:15:16.547 "base_bdevs_list": [ 00:15:16.547 { 
00:15:16.547 "name": "BaseBdev1", 00:15:16.547 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:16.547 "is_configured": true, 00:15:16.547 "data_offset": 0, 00:15:16.547 "data_size": 65536 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "name": null, 00:15:16.547 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:16.547 "is_configured": false, 00:15:16.547 "data_offset": 0, 00:15:16.547 "data_size": 65536 00:15:16.547 }, 00:15:16.547 { 00:15:16.547 "name": "BaseBdev3", 00:15:16.547 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:16.547 "is_configured": true, 00:15:16.547 "data_offset": 0, 00:15:16.547 "data_size": 65536 00:15:16.547 } 00:15:16.547 ] 00:15:16.547 }' 00:15:16.547 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.547 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.807 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.807 [2024-11-18 13:31:46.790921] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.066 13:31:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.066 "name": "Existed_Raid", 00:15:17.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.066 "strip_size_kb": 64, 00:15:17.066 "state": "configuring", 00:15:17.066 "raid_level": "raid5f", 00:15:17.066 "superblock": false, 00:15:17.066 "num_base_bdevs": 3, 00:15:17.066 "num_base_bdevs_discovered": 1, 00:15:17.066 "num_base_bdevs_operational": 3, 00:15:17.066 "base_bdevs_list": [ 00:15:17.066 { 00:15:17.066 "name": null, 00:15:17.066 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:17.066 "is_configured": false, 00:15:17.066 "data_offset": 0, 00:15:17.066 "data_size": 65536 00:15:17.066 }, 00:15:17.066 { 00:15:17.066 "name": null, 00:15:17.066 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:17.066 "is_configured": false, 00:15:17.066 "data_offset": 0, 00:15:17.066 "data_size": 65536 00:15:17.066 }, 00:15:17.066 { 00:15:17.066 "name": "BaseBdev3", 00:15:17.067 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:17.067 "is_configured": true, 00:15:17.067 "data_offset": 0, 00:15:17.067 "data_size": 65536 00:15:17.067 } 00:15:17.067 ] 00:15:17.067 }' 00:15:17.067 13:31:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.067 13:31:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.325 [2024-11-18 13:31:47.356588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.325 13:31:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.325 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.583 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.583 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.583 "name": "Existed_Raid", 00:15:17.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.583 "strip_size_kb": 64, 00:15:17.583 "state": "configuring", 00:15:17.583 "raid_level": "raid5f", 00:15:17.583 "superblock": false, 00:15:17.583 "num_base_bdevs": 3, 00:15:17.583 "num_base_bdevs_discovered": 2, 00:15:17.583 "num_base_bdevs_operational": 3, 00:15:17.583 "base_bdevs_list": [ 00:15:17.583 { 00:15:17.583 "name": null, 00:15:17.583 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:17.583 "is_configured": false, 00:15:17.583 "data_offset": 0, 00:15:17.583 "data_size": 65536 00:15:17.583 }, 00:15:17.583 { 00:15:17.583 "name": "BaseBdev2", 00:15:17.583 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:17.583 "is_configured": true, 00:15:17.583 "data_offset": 0, 00:15:17.583 "data_size": 65536 00:15:17.583 }, 00:15:17.583 { 00:15:17.583 "name": "BaseBdev3", 00:15:17.583 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:17.583 "is_configured": true, 00:15:17.583 "data_offset": 0, 00:15:17.583 "data_size": 65536 00:15:17.583 } 00:15:17.583 ] 00:15:17.583 }' 00:15:17.583 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.584 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.842 13:31:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.102 [2024-11-18 13:31:47.958836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:18.102 [2024-11-18 13:31:47.958883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:18.102 [2024-11-18 13:31:47.958893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:18.102 [2024-11-18 13:31:47.959117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:18.102 [2024-11-18 13:31:47.963871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:18.102 [2024-11-18 13:31:47.963904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:18.102 [2024-11-18 13:31:47.964190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.102 NewBaseBdev 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:18.102 13:31:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.102 13:31:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.102 [ 00:15:18.102 { 00:15:18.102 "name": "NewBaseBdev", 00:15:18.102 "aliases": [ 00:15:18.102 "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51" 00:15:18.102 ], 00:15:18.102 "product_name": "Malloc disk", 00:15:18.102 "block_size": 512, 00:15:18.102 "num_blocks": 65536, 00:15:18.102 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:18.102 "assigned_rate_limits": { 00:15:18.102 "rw_ios_per_sec": 0, 00:15:18.102 "rw_mbytes_per_sec": 0, 00:15:18.102 "r_mbytes_per_sec": 0, 00:15:18.102 "w_mbytes_per_sec": 0 00:15:18.102 }, 00:15:18.102 "claimed": true, 00:15:18.102 "claim_type": "exclusive_write", 00:15:18.102 "zoned": false, 00:15:18.102 "supported_io_types": { 00:15:18.102 "read": true, 00:15:18.102 "write": true, 00:15:18.102 "unmap": true, 00:15:18.102 "flush": true, 00:15:18.102 "reset": true, 00:15:18.102 "nvme_admin": false, 00:15:18.102 "nvme_io": false, 00:15:18.102 "nvme_io_md": false, 00:15:18.102 "write_zeroes": true, 00:15:18.102 "zcopy": true, 00:15:18.102 "get_zone_info": false, 00:15:18.102 "zone_management": false, 00:15:18.102 "zone_append": false, 00:15:18.102 "compare": false, 00:15:18.102 "compare_and_write": false, 00:15:18.102 "abort": true, 00:15:18.102 "seek_hole": false, 00:15:18.102 "seek_data": false, 00:15:18.102 "copy": true, 00:15:18.102 "nvme_iov_md": false 00:15:18.102 }, 00:15:18.102 "memory_domains": [ 00:15:18.102 { 00:15:18.102 "dma_device_id": "system", 00:15:18.102 "dma_device_type": 1 00:15:18.102 }, 00:15:18.102 { 00:15:18.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.102 "dma_device_type": 2 00:15:18.102 } 00:15:18.102 ], 00:15:18.102 "driver_specific": {} 00:15:18.102 } 00:15:18.102 ] 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.102 13:31:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.102 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.102 "name": "Existed_Raid", 00:15:18.103 "uuid": "4bbb59c1-ba80-4c36-a12e-e7b2a0eadf15", 00:15:18.103 "strip_size_kb": 64, 00:15:18.103 "state": "online", 
00:15:18.103 "raid_level": "raid5f", 00:15:18.103 "superblock": false, 00:15:18.103 "num_base_bdevs": 3, 00:15:18.103 "num_base_bdevs_discovered": 3, 00:15:18.103 "num_base_bdevs_operational": 3, 00:15:18.103 "base_bdevs_list": [ 00:15:18.103 { 00:15:18.103 "name": "NewBaseBdev", 00:15:18.103 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:18.103 "is_configured": true, 00:15:18.103 "data_offset": 0, 00:15:18.103 "data_size": 65536 00:15:18.103 }, 00:15:18.103 { 00:15:18.103 "name": "BaseBdev2", 00:15:18.103 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:18.103 "is_configured": true, 00:15:18.103 "data_offset": 0, 00:15:18.103 "data_size": 65536 00:15:18.103 }, 00:15:18.103 { 00:15:18.103 "name": "BaseBdev3", 00:15:18.103 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:18.103 "is_configured": true, 00:15:18.103 "data_offset": 0, 00:15:18.103 "data_size": 65536 00:15:18.103 } 00:15:18.103 ] 00:15:18.103 }' 00:15:18.103 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.103 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:18.362 13:31:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.362 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.362 [2024-11-18 13:31:48.405378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:18.623 "name": "Existed_Raid", 00:15:18.623 "aliases": [ 00:15:18.623 "4bbb59c1-ba80-4c36-a12e-e7b2a0eadf15" 00:15:18.623 ], 00:15:18.623 "product_name": "Raid Volume", 00:15:18.623 "block_size": 512, 00:15:18.623 "num_blocks": 131072, 00:15:18.623 "uuid": "4bbb59c1-ba80-4c36-a12e-e7b2a0eadf15", 00:15:18.623 "assigned_rate_limits": { 00:15:18.623 "rw_ios_per_sec": 0, 00:15:18.623 "rw_mbytes_per_sec": 0, 00:15:18.623 "r_mbytes_per_sec": 0, 00:15:18.623 "w_mbytes_per_sec": 0 00:15:18.623 }, 00:15:18.623 "claimed": false, 00:15:18.623 "zoned": false, 00:15:18.623 "supported_io_types": { 00:15:18.623 "read": true, 00:15:18.623 "write": true, 00:15:18.623 "unmap": false, 00:15:18.623 "flush": false, 00:15:18.623 "reset": true, 00:15:18.623 "nvme_admin": false, 00:15:18.623 "nvme_io": false, 00:15:18.623 "nvme_io_md": false, 00:15:18.623 "write_zeroes": true, 00:15:18.623 "zcopy": false, 00:15:18.623 "get_zone_info": false, 00:15:18.623 "zone_management": false, 00:15:18.623 "zone_append": false, 00:15:18.623 "compare": false, 00:15:18.623 "compare_and_write": false, 00:15:18.623 "abort": false, 00:15:18.623 "seek_hole": false, 00:15:18.623 "seek_data": false, 00:15:18.623 "copy": false, 00:15:18.623 "nvme_iov_md": false 00:15:18.623 }, 00:15:18.623 "driver_specific": { 00:15:18.623 "raid": { 00:15:18.623 "uuid": 
"4bbb59c1-ba80-4c36-a12e-e7b2a0eadf15", 00:15:18.623 "strip_size_kb": 64, 00:15:18.623 "state": "online", 00:15:18.623 "raid_level": "raid5f", 00:15:18.623 "superblock": false, 00:15:18.623 "num_base_bdevs": 3, 00:15:18.623 "num_base_bdevs_discovered": 3, 00:15:18.623 "num_base_bdevs_operational": 3, 00:15:18.623 "base_bdevs_list": [ 00:15:18.623 { 00:15:18.623 "name": "NewBaseBdev", 00:15:18.623 "uuid": "2d9567b9-f94a-4c0e-a4f3-e99cf44b8e51", 00:15:18.623 "is_configured": true, 00:15:18.623 "data_offset": 0, 00:15:18.623 "data_size": 65536 00:15:18.623 }, 00:15:18.623 { 00:15:18.623 "name": "BaseBdev2", 00:15:18.623 "uuid": "baef0399-1fa5-4278-95cd-36c728f80de8", 00:15:18.623 "is_configured": true, 00:15:18.623 "data_offset": 0, 00:15:18.623 "data_size": 65536 00:15:18.623 }, 00:15:18.623 { 00:15:18.623 "name": "BaseBdev3", 00:15:18.623 "uuid": "485bf842-5c1b-4652-88fb-f17715ce0f9d", 00:15:18.623 "is_configured": true, 00:15:18.623 "data_offset": 0, 00:15:18.623 "data_size": 65536 00:15:18.623 } 00:15:18.623 ] 00:15:18.623 } 00:15:18.623 } 00:15:18.623 }' 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:18.623 BaseBdev2 00:15:18.623 BaseBdev3' 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:18.623 13:31:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.624 [2024-11-18 13:31:48.624871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:18.624 [2024-11-18 13:31:48.624897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.624 [2024-11-18 13:31:48.624957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.624 [2024-11-18 13:31:48.625229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.624 [2024-11-18 13:31:48.625249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79837 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79837 ']' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79837 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79837 00:15:18.624 killing process with pid 79837 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79837' 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79837 00:15:18.624 [2024-11-18 13:31:48.671509] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.624 13:31:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79837 00:15:19.193 [2024-11-18 13:31:48.956245] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.130 13:31:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:20.130 00:15:20.130 real 0m10.314s 00:15:20.130 user 0m16.344s 00:15:20.130 sys 0m1.925s 00:15:20.130 13:31:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.130 13:31:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.130 ************************************ 00:15:20.130 END TEST raid5f_state_function_test 00:15:20.130 ************************************ 00:15:20.130 13:31:50 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:20.130 13:31:50 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:20.130 13:31:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.130 13:31:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:20.130 ************************************ 00:15:20.130 START TEST raid5f_state_function_test_sb 00:15:20.130 ************************************ 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.130 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:20.131 13:31:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80456 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:20.131 Process raid pid: 80456 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80456' 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80456 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80456 ']' 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.131 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.131 [2024-11-18 13:31:50.179680] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:15:20.131 [2024-11-18 13:31:50.179815] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.390 [2024-11-18 13:31:50.360113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.650 [2024-11-18 13:31:50.466075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.650 [2024-11-18 13:31:50.652590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.650 [2024-11-18 13:31:50.652620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.218 [2024-11-18 13:31:50.979653] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.218 [2024-11-18 13:31:50.979705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.218 [2024-11-18 13:31:50.979716] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.218 [2024-11-18 13:31:50.979726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.218 [2024-11-18 13:31:50.979733] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:21.218 [2024-11-18 13:31:50.979741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.218 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.219 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.219 13:31:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.219 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.219 13:31:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.219 13:31:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.219 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.219 "name": "Existed_Raid", 00:15:21.219 "uuid": "f6463f14-11ca-4ac0-a057-7138c857ee0b", 00:15:21.219 "strip_size_kb": 64, 00:15:21.219 "state": "configuring", 00:15:21.219 "raid_level": "raid5f", 00:15:21.219 "superblock": true, 00:15:21.219 "num_base_bdevs": 3, 00:15:21.219 "num_base_bdevs_discovered": 0, 00:15:21.219 "num_base_bdevs_operational": 3, 00:15:21.219 "base_bdevs_list": [ 00:15:21.219 { 00:15:21.219 "name": "BaseBdev1", 00:15:21.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.219 "is_configured": false, 00:15:21.219 "data_offset": 0, 00:15:21.219 "data_size": 0 00:15:21.219 }, 00:15:21.219 { 00:15:21.219 "name": "BaseBdev2", 00:15:21.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.219 "is_configured": false, 00:15:21.219 "data_offset": 0, 00:15:21.219 "data_size": 0 00:15:21.219 }, 00:15:21.219 { 00:15:21.219 "name": "BaseBdev3", 00:15:21.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.219 "is_configured": false, 00:15:21.219 "data_offset": 0, 00:15:21.219 "data_size": 0 00:15:21.219 } 00:15:21.219 ] 00:15:21.219 }' 00:15:21.219 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.219 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 [2024-11-18 13:31:51.374893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.479 
[2024-11-18 13:31:51.374927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 [2024-11-18 13:31:51.386888] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.479 [2024-11-18 13:31:51.386928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.479 [2024-11-18 13:31:51.386936] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.479 [2024-11-18 13:31:51.386945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.479 [2024-11-18 13:31:51.386951] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.479 [2024-11-18 13:31:51.386959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 [2024-11-18 13:31:51.432996] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.479 BaseBdev1 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 [ 00:15:21.479 { 00:15:21.479 "name": "BaseBdev1", 00:15:21.479 "aliases": [ 00:15:21.479 "7b44d0ef-9002-4696-b6cb-d00382d42be3" 00:15:21.479 ], 00:15:21.479 "product_name": "Malloc disk", 00:15:21.479 "block_size": 512, 00:15:21.479 
"num_blocks": 65536, 00:15:21.479 "uuid": "7b44d0ef-9002-4696-b6cb-d00382d42be3", 00:15:21.479 "assigned_rate_limits": { 00:15:21.479 "rw_ios_per_sec": 0, 00:15:21.479 "rw_mbytes_per_sec": 0, 00:15:21.479 "r_mbytes_per_sec": 0, 00:15:21.479 "w_mbytes_per_sec": 0 00:15:21.479 }, 00:15:21.479 "claimed": true, 00:15:21.479 "claim_type": "exclusive_write", 00:15:21.479 "zoned": false, 00:15:21.479 "supported_io_types": { 00:15:21.479 "read": true, 00:15:21.479 "write": true, 00:15:21.479 "unmap": true, 00:15:21.479 "flush": true, 00:15:21.479 "reset": true, 00:15:21.479 "nvme_admin": false, 00:15:21.479 "nvme_io": false, 00:15:21.479 "nvme_io_md": false, 00:15:21.479 "write_zeroes": true, 00:15:21.479 "zcopy": true, 00:15:21.479 "get_zone_info": false, 00:15:21.479 "zone_management": false, 00:15:21.479 "zone_append": false, 00:15:21.479 "compare": false, 00:15:21.479 "compare_and_write": false, 00:15:21.479 "abort": true, 00:15:21.479 "seek_hole": false, 00:15:21.479 "seek_data": false, 00:15:21.479 "copy": true, 00:15:21.479 "nvme_iov_md": false 00:15:21.479 }, 00:15:21.479 "memory_domains": [ 00:15:21.479 { 00:15:21.479 "dma_device_id": "system", 00:15:21.479 "dma_device_type": 1 00:15:21.479 }, 00:15:21.479 { 00:15:21.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.479 "dma_device_type": 2 00:15:21.479 } 00:15:21.479 ], 00:15:21.479 "driver_specific": {} 00:15:21.479 } 00:15:21.479 ] 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.479 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.479 "name": "Existed_Raid", 00:15:21.479 "uuid": "d3f01451-e170-49b9-aac0-441c5ebd6e20", 00:15:21.479 "strip_size_kb": 64, 00:15:21.479 "state": "configuring", 00:15:21.479 "raid_level": "raid5f", 00:15:21.479 "superblock": true, 00:15:21.479 "num_base_bdevs": 3, 00:15:21.479 "num_base_bdevs_discovered": 1, 00:15:21.479 "num_base_bdevs_operational": 3, 00:15:21.479 "base_bdevs_list": [ 00:15:21.479 { 00:15:21.479 
"name": "BaseBdev1", 00:15:21.479 "uuid": "7b44d0ef-9002-4696-b6cb-d00382d42be3", 00:15:21.479 "is_configured": true, 00:15:21.479 "data_offset": 2048, 00:15:21.479 "data_size": 63488 00:15:21.479 }, 00:15:21.479 { 00:15:21.479 "name": "BaseBdev2", 00:15:21.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.479 "is_configured": false, 00:15:21.479 "data_offset": 0, 00:15:21.479 "data_size": 0 00:15:21.479 }, 00:15:21.479 { 00:15:21.479 "name": "BaseBdev3", 00:15:21.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.479 "is_configured": false, 00:15:21.479 "data_offset": 0, 00:15:21.479 "data_size": 0 00:15:21.479 } 00:15:21.479 ] 00:15:21.479 }' 00:15:21.480 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.480 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.066 [2024-11-18 13:31:51.928194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.066 [2024-11-18 13:31:51.928234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:22.066 [2024-11-18 13:31:51.940225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.066 [2024-11-18 13:31:51.941849] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.066 [2024-11-18 13:31:51.941890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.066 [2024-11-18 13:31:51.941900] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.066 [2024-11-18 13:31:51.941909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.066 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.067 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.067 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.067 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.067 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.067 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.067 "name": "Existed_Raid", 00:15:22.067 "uuid": "49026e00-8833-474d-895b-5b9863f3814a", 00:15:22.067 "strip_size_kb": 64, 00:15:22.067 "state": "configuring", 00:15:22.067 "raid_level": "raid5f", 00:15:22.067 "superblock": true, 00:15:22.067 "num_base_bdevs": 3, 00:15:22.067 "num_base_bdevs_discovered": 1, 00:15:22.067 "num_base_bdevs_operational": 3, 00:15:22.067 "base_bdevs_list": [ 00:15:22.067 { 00:15:22.067 "name": "BaseBdev1", 00:15:22.067 "uuid": "7b44d0ef-9002-4696-b6cb-d00382d42be3", 00:15:22.067 "is_configured": true, 00:15:22.067 "data_offset": 2048, 00:15:22.067 "data_size": 63488 00:15:22.067 }, 00:15:22.067 { 00:15:22.067 "name": "BaseBdev2", 00:15:22.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.067 "is_configured": false, 00:15:22.067 "data_offset": 0, 00:15:22.067 "data_size": 0 00:15:22.067 }, 00:15:22.067 { 00:15:22.067 "name": "BaseBdev3", 00:15:22.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.067 "is_configured": false, 00:15:22.067 "data_offset": 0, 00:15:22.067 "data_size": 
0 00:15:22.067 } 00:15:22.067 ] 00:15:22.067 }' 00:15:22.067 13:31:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.067 13:31:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.636 [2024-11-18 13:31:52.425237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.636 BaseBdev2 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.636 [ 00:15:22.636 { 00:15:22.636 "name": "BaseBdev2", 00:15:22.636 "aliases": [ 00:15:22.636 "94206440-1193-4ebe-a426-e22a10867d7b" 00:15:22.636 ], 00:15:22.636 "product_name": "Malloc disk", 00:15:22.636 "block_size": 512, 00:15:22.636 "num_blocks": 65536, 00:15:22.636 "uuid": "94206440-1193-4ebe-a426-e22a10867d7b", 00:15:22.636 "assigned_rate_limits": { 00:15:22.636 "rw_ios_per_sec": 0, 00:15:22.636 "rw_mbytes_per_sec": 0, 00:15:22.636 "r_mbytes_per_sec": 0, 00:15:22.636 "w_mbytes_per_sec": 0 00:15:22.636 }, 00:15:22.636 "claimed": true, 00:15:22.636 "claim_type": "exclusive_write", 00:15:22.636 "zoned": false, 00:15:22.636 "supported_io_types": { 00:15:22.636 "read": true, 00:15:22.636 "write": true, 00:15:22.636 "unmap": true, 00:15:22.636 "flush": true, 00:15:22.636 "reset": true, 00:15:22.636 "nvme_admin": false, 00:15:22.636 "nvme_io": false, 00:15:22.636 "nvme_io_md": false, 00:15:22.636 "write_zeroes": true, 00:15:22.636 "zcopy": true, 00:15:22.636 "get_zone_info": false, 00:15:22.636 "zone_management": false, 00:15:22.636 "zone_append": false, 00:15:22.636 "compare": false, 00:15:22.636 "compare_and_write": false, 00:15:22.636 "abort": true, 00:15:22.636 "seek_hole": false, 00:15:22.636 "seek_data": false, 00:15:22.636 "copy": true, 00:15:22.636 "nvme_iov_md": false 00:15:22.636 }, 00:15:22.636 "memory_domains": [ 00:15:22.636 { 00:15:22.636 "dma_device_id": "system", 00:15:22.636 "dma_device_type": 1 00:15:22.636 }, 00:15:22.636 { 00:15:22.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.636 "dma_device_type": 2 00:15:22.636 } 
00:15:22.636 ], 00:15:22.636 "driver_specific": {} 00:15:22.636 } 00:15:22.636 ] 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.636 13:31:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.636 "name": "Existed_Raid", 00:15:22.636 "uuid": "49026e00-8833-474d-895b-5b9863f3814a", 00:15:22.636 "strip_size_kb": 64, 00:15:22.636 "state": "configuring", 00:15:22.636 "raid_level": "raid5f", 00:15:22.636 "superblock": true, 00:15:22.636 "num_base_bdevs": 3, 00:15:22.636 "num_base_bdevs_discovered": 2, 00:15:22.636 "num_base_bdevs_operational": 3, 00:15:22.636 "base_bdevs_list": [ 00:15:22.636 { 00:15:22.636 "name": "BaseBdev1", 00:15:22.636 "uuid": "7b44d0ef-9002-4696-b6cb-d00382d42be3", 00:15:22.636 "is_configured": true, 00:15:22.636 "data_offset": 2048, 00:15:22.636 "data_size": 63488 00:15:22.636 }, 00:15:22.636 { 00:15:22.636 "name": "BaseBdev2", 00:15:22.636 "uuid": "94206440-1193-4ebe-a426-e22a10867d7b", 00:15:22.636 "is_configured": true, 00:15:22.636 "data_offset": 2048, 00:15:22.636 "data_size": 63488 00:15:22.636 }, 00:15:22.636 { 00:15:22.636 "name": "BaseBdev3", 00:15:22.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.636 "is_configured": false, 00:15:22.636 "data_offset": 0, 00:15:22.636 "data_size": 0 00:15:22.636 } 00:15:22.636 ] 00:15:22.636 }' 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.636 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.895 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.895 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:22.895 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.155 [2024-11-18 13:31:52.992349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.155 [2024-11-18 13:31:52.992619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:23.155 [2024-11-18 13:31:52.992645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.155 [2024-11-18 13:31:52.992898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:23.155 BaseBdev3 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.155 13:31:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.155 [2024-11-18 13:31:52.998332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:23.155 [2024-11-18 13:31:52.998354] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:23.155 [2024-11-18 13:31:52.998500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.155 [ 00:15:23.155 { 00:15:23.155 "name": "BaseBdev3", 00:15:23.155 "aliases": [ 00:15:23.155 "2b8e4520-a5a8-4dda-ba19-836ef278ad83" 00:15:23.155 ], 00:15:23.155 "product_name": "Malloc disk", 00:15:23.155 "block_size": 512, 00:15:23.155 "num_blocks": 65536, 00:15:23.155 "uuid": "2b8e4520-a5a8-4dda-ba19-836ef278ad83", 00:15:23.155 "assigned_rate_limits": { 00:15:23.155 "rw_ios_per_sec": 0, 00:15:23.155 "rw_mbytes_per_sec": 0, 00:15:23.155 "r_mbytes_per_sec": 0, 00:15:23.155 "w_mbytes_per_sec": 0 00:15:23.155 }, 00:15:23.155 "claimed": true, 00:15:23.155 "claim_type": "exclusive_write", 00:15:23.155 "zoned": false, 00:15:23.155 "supported_io_types": { 00:15:23.155 "read": true, 00:15:23.155 "write": true, 00:15:23.155 "unmap": true, 00:15:23.155 "flush": true, 00:15:23.155 "reset": true, 00:15:23.155 "nvme_admin": false, 00:15:23.155 "nvme_io": false, 00:15:23.155 "nvme_io_md": false, 00:15:23.155 "write_zeroes": true, 00:15:23.155 "zcopy": true, 00:15:23.155 "get_zone_info": false, 00:15:23.155 "zone_management": false, 00:15:23.155 "zone_append": false, 00:15:23.155 "compare": false, 00:15:23.155 "compare_and_write": false, 00:15:23.155 "abort": true, 00:15:23.155 "seek_hole": false, 00:15:23.155 "seek_data": false, 00:15:23.155 "copy": true, 00:15:23.155 
"nvme_iov_md": false 00:15:23.155 }, 00:15:23.155 "memory_domains": [ 00:15:23.155 { 00:15:23.155 "dma_device_id": "system", 00:15:23.155 "dma_device_type": 1 00:15:23.155 }, 00:15:23.155 { 00:15:23.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.155 "dma_device_type": 2 00:15:23.155 } 00:15:23.155 ], 00:15:23.155 "driver_specific": {} 00:15:23.155 } 00:15:23.155 ] 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.155 "name": "Existed_Raid", 00:15:23.155 "uuid": "49026e00-8833-474d-895b-5b9863f3814a", 00:15:23.155 "strip_size_kb": 64, 00:15:23.155 "state": "online", 00:15:23.155 "raid_level": "raid5f", 00:15:23.155 "superblock": true, 00:15:23.155 "num_base_bdevs": 3, 00:15:23.155 "num_base_bdevs_discovered": 3, 00:15:23.155 "num_base_bdevs_operational": 3, 00:15:23.155 "base_bdevs_list": [ 00:15:23.155 { 00:15:23.155 "name": "BaseBdev1", 00:15:23.155 "uuid": "7b44d0ef-9002-4696-b6cb-d00382d42be3", 00:15:23.155 "is_configured": true, 00:15:23.155 "data_offset": 2048, 00:15:23.155 "data_size": 63488 00:15:23.155 }, 00:15:23.155 { 00:15:23.155 "name": "BaseBdev2", 00:15:23.155 "uuid": "94206440-1193-4ebe-a426-e22a10867d7b", 00:15:23.155 "is_configured": true, 00:15:23.155 "data_offset": 2048, 00:15:23.155 "data_size": 63488 00:15:23.155 }, 00:15:23.155 { 00:15:23.155 "name": "BaseBdev3", 00:15:23.155 "uuid": "2b8e4520-a5a8-4dda-ba19-836ef278ad83", 00:15:23.155 "is_configured": true, 00:15:23.155 "data_offset": 2048, 00:15:23.155 "data_size": 63488 00:15:23.155 } 00:15:23.155 ] 00:15:23.155 }' 00:15:23.155 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.155 13:31:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 [2024-11-18 13:31:53.495939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.724 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.724 "name": "Existed_Raid", 00:15:23.724 "aliases": [ 00:15:23.724 "49026e00-8833-474d-895b-5b9863f3814a" 00:15:23.724 ], 00:15:23.724 "product_name": "Raid Volume", 00:15:23.724 "block_size": 512, 00:15:23.724 "num_blocks": 126976, 00:15:23.724 "uuid": "49026e00-8833-474d-895b-5b9863f3814a", 00:15:23.725 "assigned_rate_limits": { 00:15:23.725 "rw_ios_per_sec": 0, 00:15:23.725 
"rw_mbytes_per_sec": 0, 00:15:23.725 "r_mbytes_per_sec": 0, 00:15:23.725 "w_mbytes_per_sec": 0 00:15:23.725 }, 00:15:23.725 "claimed": false, 00:15:23.725 "zoned": false, 00:15:23.725 "supported_io_types": { 00:15:23.725 "read": true, 00:15:23.725 "write": true, 00:15:23.725 "unmap": false, 00:15:23.725 "flush": false, 00:15:23.725 "reset": true, 00:15:23.725 "nvme_admin": false, 00:15:23.725 "nvme_io": false, 00:15:23.725 "nvme_io_md": false, 00:15:23.725 "write_zeroes": true, 00:15:23.725 "zcopy": false, 00:15:23.725 "get_zone_info": false, 00:15:23.725 "zone_management": false, 00:15:23.725 "zone_append": false, 00:15:23.725 "compare": false, 00:15:23.725 "compare_and_write": false, 00:15:23.725 "abort": false, 00:15:23.725 "seek_hole": false, 00:15:23.725 "seek_data": false, 00:15:23.725 "copy": false, 00:15:23.725 "nvme_iov_md": false 00:15:23.725 }, 00:15:23.725 "driver_specific": { 00:15:23.725 "raid": { 00:15:23.725 "uuid": "49026e00-8833-474d-895b-5b9863f3814a", 00:15:23.725 "strip_size_kb": 64, 00:15:23.725 "state": "online", 00:15:23.725 "raid_level": "raid5f", 00:15:23.725 "superblock": true, 00:15:23.725 "num_base_bdevs": 3, 00:15:23.725 "num_base_bdevs_discovered": 3, 00:15:23.725 "num_base_bdevs_operational": 3, 00:15:23.725 "base_bdevs_list": [ 00:15:23.725 { 00:15:23.725 "name": "BaseBdev1", 00:15:23.725 "uuid": "7b44d0ef-9002-4696-b6cb-d00382d42be3", 00:15:23.725 "is_configured": true, 00:15:23.725 "data_offset": 2048, 00:15:23.725 "data_size": 63488 00:15:23.725 }, 00:15:23.725 { 00:15:23.725 "name": "BaseBdev2", 00:15:23.725 "uuid": "94206440-1193-4ebe-a426-e22a10867d7b", 00:15:23.725 "is_configured": true, 00:15:23.725 "data_offset": 2048, 00:15:23.725 "data_size": 63488 00:15:23.725 }, 00:15:23.725 { 00:15:23.725 "name": "BaseBdev3", 00:15:23.725 "uuid": "2b8e4520-a5a8-4dda-ba19-836ef278ad83", 00:15:23.725 "is_configured": true, 00:15:23.725 "data_offset": 2048, 00:15:23.725 "data_size": 63488 00:15:23.725 } 00:15:23.725 ] 00:15:23.725 } 
00:15:23.725 } 00:15:23.725 }' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:23.725 BaseBdev2 00:15:23.725 BaseBdev3' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.725 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.725 [2024-11-18 
13:31:53.743371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.985 13:31:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.985 "name": "Existed_Raid", 00:15:23.985 "uuid": "49026e00-8833-474d-895b-5b9863f3814a", 00:15:23.985 "strip_size_kb": 64, 00:15:23.985 "state": "online", 00:15:23.985 "raid_level": "raid5f", 00:15:23.985 "superblock": true, 00:15:23.985 "num_base_bdevs": 3, 00:15:23.985 "num_base_bdevs_discovered": 2, 00:15:23.985 "num_base_bdevs_operational": 2, 00:15:23.985 "base_bdevs_list": [ 00:15:23.985 { 00:15:23.985 "name": null, 00:15:23.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.985 "is_configured": false, 00:15:23.985 "data_offset": 0, 00:15:23.985 "data_size": 63488 00:15:23.985 }, 00:15:23.985 { 00:15:23.985 "name": "BaseBdev2", 00:15:23.985 "uuid": "94206440-1193-4ebe-a426-e22a10867d7b", 00:15:23.985 "is_configured": true, 00:15:23.985 "data_offset": 2048, 00:15:23.985 "data_size": 63488 00:15:23.985 }, 00:15:23.985 { 00:15:23.985 "name": "BaseBdev3", 00:15:23.985 "uuid": "2b8e4520-a5a8-4dda-ba19-836ef278ad83", 00:15:23.985 "is_configured": true, 00:15:23.985 "data_offset": 2048, 00:15:23.985 "data_size": 63488 00:15:23.985 } 00:15:23.985 ] 00:15:23.985 }' 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.985 13:31:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.244 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.244 [2024-11-18 13:31:54.288459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.244 [2024-11-18 13:31:54.288603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.502 [2024-11-18 13:31:54.378975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.502 13:31:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.502 [2024-11-18 13:31:54.434877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.502 [2024-11-18 13:31:54.434921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:24.502 
13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.502 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 BaseBdev2 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.762 13:31:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.762 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 [ 00:15:24.762 { 00:15:24.762 "name": "BaseBdev2", 00:15:24.762 "aliases": [ 00:15:24.762 "7e38e5ab-864d-4727-a9a9-1a3bb1847556" 00:15:24.762 ], 00:15:24.762 "product_name": "Malloc disk", 00:15:24.762 "block_size": 512, 00:15:24.762 "num_blocks": 65536, 00:15:24.762 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:24.762 "assigned_rate_limits": { 00:15:24.762 "rw_ios_per_sec": 0, 00:15:24.762 "rw_mbytes_per_sec": 0, 00:15:24.762 "r_mbytes_per_sec": 0, 00:15:24.762 "w_mbytes_per_sec": 0 00:15:24.762 }, 00:15:24.762 "claimed": false, 00:15:24.762 "zoned": false, 00:15:24.762 "supported_io_types": { 00:15:24.762 "read": true, 00:15:24.762 "write": true, 00:15:24.762 "unmap": true, 00:15:24.762 "flush": true, 00:15:24.762 "reset": true, 00:15:24.762 "nvme_admin": false, 00:15:24.762 "nvme_io": false, 00:15:24.762 "nvme_io_md": false, 00:15:24.762 "write_zeroes": true, 00:15:24.762 "zcopy": true, 00:15:24.762 "get_zone_info": false, 
00:15:24.762 "zone_management": false, 00:15:24.762 "zone_append": false, 00:15:24.762 "compare": false, 00:15:24.762 "compare_and_write": false, 00:15:24.762 "abort": true, 00:15:24.762 "seek_hole": false, 00:15:24.762 "seek_data": false, 00:15:24.762 "copy": true, 00:15:24.762 "nvme_iov_md": false 00:15:24.762 }, 00:15:24.762 "memory_domains": [ 00:15:24.762 { 00:15:24.763 "dma_device_id": "system", 00:15:24.763 "dma_device_type": 1 00:15:24.763 }, 00:15:24.763 { 00:15:24.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.763 "dma_device_type": 2 00:15:24.763 } 00:15:24.763 ], 00:15:24.763 "driver_specific": {} 00:15:24.763 } 00:15:24.763 ] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.763 BaseBdev3 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.763 13:31:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.763 [ 00:15:24.763 { 00:15:24.763 "name": "BaseBdev3", 00:15:24.763 "aliases": [ 00:15:24.763 "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9" 00:15:24.763 ], 00:15:24.763 "product_name": "Malloc disk", 00:15:24.763 "block_size": 512, 00:15:24.763 "num_blocks": 65536, 00:15:24.763 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:24.763 "assigned_rate_limits": { 00:15:24.763 "rw_ios_per_sec": 0, 00:15:24.763 "rw_mbytes_per_sec": 0, 00:15:24.763 "r_mbytes_per_sec": 0, 00:15:24.763 "w_mbytes_per_sec": 0 00:15:24.763 }, 00:15:24.763 "claimed": false, 00:15:24.763 "zoned": false, 00:15:24.763 "supported_io_types": { 00:15:24.763 "read": true, 00:15:24.763 "write": true, 00:15:24.763 "unmap": true, 00:15:24.763 "flush": true, 00:15:24.763 "reset": true, 00:15:24.763 "nvme_admin": false, 00:15:24.763 "nvme_io": false, 00:15:24.763 "nvme_io_md": 
false, 00:15:24.763 "write_zeroes": true, 00:15:24.763 "zcopy": true, 00:15:24.763 "get_zone_info": false, 00:15:24.763 "zone_management": false, 00:15:24.763 "zone_append": false, 00:15:24.763 "compare": false, 00:15:24.763 "compare_and_write": false, 00:15:24.763 "abort": true, 00:15:24.763 "seek_hole": false, 00:15:24.763 "seek_data": false, 00:15:24.763 "copy": true, 00:15:24.763 "nvme_iov_md": false 00:15:24.763 }, 00:15:24.763 "memory_domains": [ 00:15:24.763 { 00:15:24.763 "dma_device_id": "system", 00:15:24.763 "dma_device_type": 1 00:15:24.763 }, 00:15:24.763 { 00:15:24.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.763 "dma_device_type": 2 00:15:24.763 } 00:15:24.763 ], 00:15:24.763 "driver_specific": {} 00:15:24.763 } 00:15:24.763 ] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.763 [2024-11-18 13:31:54.710927] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.763 [2024-11-18 13:31:54.710971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.763 [2024-11-18 13:31:54.710991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:24.763 [2024-11-18 13:31:54.712703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.763 13:31:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.763 "name": "Existed_Raid", 00:15:24.763 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:24.763 "strip_size_kb": 64, 00:15:24.763 "state": "configuring", 00:15:24.763 "raid_level": "raid5f", 00:15:24.763 "superblock": true, 00:15:24.763 "num_base_bdevs": 3, 00:15:24.763 "num_base_bdevs_discovered": 2, 00:15:24.763 "num_base_bdevs_operational": 3, 00:15:24.763 "base_bdevs_list": [ 00:15:24.763 { 00:15:24.763 "name": "BaseBdev1", 00:15:24.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.763 "is_configured": false, 00:15:24.763 "data_offset": 0, 00:15:24.763 "data_size": 0 00:15:24.763 }, 00:15:24.763 { 00:15:24.763 "name": "BaseBdev2", 00:15:24.763 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:24.763 "is_configured": true, 00:15:24.763 "data_offset": 2048, 00:15:24.763 "data_size": 63488 00:15:24.763 }, 00:15:24.763 { 00:15:24.763 "name": "BaseBdev3", 00:15:24.763 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:24.763 "is_configured": true, 00:15:24.763 "data_offset": 2048, 00:15:24.763 "data_size": 63488 00:15:24.763 } 00:15:24.763 ] 00:15:24.763 }' 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.763 13:31:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.332 [2024-11-18 13:31:55.162219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.332 
13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:25.332 "name": "Existed_Raid", 00:15:25.332 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:25.332 "strip_size_kb": 64, 00:15:25.332 "state": "configuring", 00:15:25.332 "raid_level": "raid5f", 00:15:25.332 "superblock": true, 00:15:25.332 "num_base_bdevs": 3, 00:15:25.332 "num_base_bdevs_discovered": 1, 00:15:25.332 "num_base_bdevs_operational": 3, 00:15:25.332 "base_bdevs_list": [ 00:15:25.332 { 00:15:25.332 "name": "BaseBdev1", 00:15:25.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.332 "is_configured": false, 00:15:25.332 "data_offset": 0, 00:15:25.332 "data_size": 0 00:15:25.332 }, 00:15:25.332 { 00:15:25.332 "name": null, 00:15:25.332 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:25.332 "is_configured": false, 00:15:25.332 "data_offset": 0, 00:15:25.332 "data_size": 63488 00:15:25.332 }, 00:15:25.332 { 00:15:25.332 "name": "BaseBdev3", 00:15:25.332 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:25.332 "is_configured": true, 00:15:25.332 "data_offset": 2048, 00:15:25.332 "data_size": 63488 00:15:25.332 } 00:15:25.332 ] 00:15:25.332 }' 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.332 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.592 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.592 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.592 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.592 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:25.592 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.852 [2024-11-18 13:31:55.697890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.852 BaseBdev1 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.852 
13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.852 [ 00:15:25.852 { 00:15:25.852 "name": "BaseBdev1", 00:15:25.852 "aliases": [ 00:15:25.852 "ab51649c-2118-4d3f-9a31-d2c8387c76d3" 00:15:25.852 ], 00:15:25.852 "product_name": "Malloc disk", 00:15:25.852 "block_size": 512, 00:15:25.852 "num_blocks": 65536, 00:15:25.852 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:25.852 "assigned_rate_limits": { 00:15:25.852 "rw_ios_per_sec": 0, 00:15:25.852 "rw_mbytes_per_sec": 0, 00:15:25.852 "r_mbytes_per_sec": 0, 00:15:25.852 "w_mbytes_per_sec": 0 00:15:25.852 }, 00:15:25.852 "claimed": true, 00:15:25.852 "claim_type": "exclusive_write", 00:15:25.852 "zoned": false, 00:15:25.852 "supported_io_types": { 00:15:25.852 "read": true, 00:15:25.852 "write": true, 00:15:25.852 "unmap": true, 00:15:25.852 "flush": true, 00:15:25.852 "reset": true, 00:15:25.852 "nvme_admin": false, 00:15:25.852 "nvme_io": false, 00:15:25.852 "nvme_io_md": false, 00:15:25.852 "write_zeroes": true, 00:15:25.852 "zcopy": true, 00:15:25.852 "get_zone_info": false, 00:15:25.852 "zone_management": false, 00:15:25.852 "zone_append": false, 00:15:25.852 "compare": false, 00:15:25.852 "compare_and_write": false, 00:15:25.852 "abort": true, 00:15:25.852 "seek_hole": false, 00:15:25.852 "seek_data": false, 00:15:25.852 "copy": true, 00:15:25.852 "nvme_iov_md": false 00:15:25.852 }, 00:15:25.852 "memory_domains": [ 00:15:25.852 { 00:15:25.852 "dma_device_id": "system", 00:15:25.852 "dma_device_type": 1 00:15:25.852 }, 00:15:25.852 { 00:15:25.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.852 "dma_device_type": 2 00:15:25.852 } 00:15:25.852 ], 00:15:25.852 "driver_specific": {} 00:15:25.852 } 00:15:25.852 ] 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.852 
13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:25.852 "name": "Existed_Raid", 00:15:25.852 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:25.852 "strip_size_kb": 64, 00:15:25.852 "state": "configuring", 00:15:25.852 "raid_level": "raid5f", 00:15:25.852 "superblock": true, 00:15:25.852 "num_base_bdevs": 3, 00:15:25.852 "num_base_bdevs_discovered": 2, 00:15:25.852 "num_base_bdevs_operational": 3, 00:15:25.852 "base_bdevs_list": [ 00:15:25.852 { 00:15:25.852 "name": "BaseBdev1", 00:15:25.852 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:25.852 "is_configured": true, 00:15:25.852 "data_offset": 2048, 00:15:25.852 "data_size": 63488 00:15:25.852 }, 00:15:25.852 { 00:15:25.852 "name": null, 00:15:25.852 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:25.852 "is_configured": false, 00:15:25.852 "data_offset": 0, 00:15:25.852 "data_size": 63488 00:15:25.852 }, 00:15:25.852 { 00:15:25.852 "name": "BaseBdev3", 00:15:25.852 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:25.852 "is_configured": true, 00:15:25.852 "data_offset": 2048, 00:15:25.852 "data_size": 63488 00:15:25.852 } 00:15:25.852 ] 00:15:25.852 }' 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.852 13:31:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.420 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.420 [2024-11-18 13:31:56.241056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.421 13:31:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.421 "name": "Existed_Raid", 00:15:26.421 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:26.421 "strip_size_kb": 64, 00:15:26.421 "state": "configuring", 00:15:26.421 "raid_level": "raid5f", 00:15:26.421 "superblock": true, 00:15:26.421 "num_base_bdevs": 3, 00:15:26.421 "num_base_bdevs_discovered": 1, 00:15:26.421 "num_base_bdevs_operational": 3, 00:15:26.421 "base_bdevs_list": [ 00:15:26.421 { 00:15:26.421 "name": "BaseBdev1", 00:15:26.421 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:26.421 "is_configured": true, 00:15:26.421 "data_offset": 2048, 00:15:26.421 "data_size": 63488 00:15:26.421 }, 00:15:26.421 { 00:15:26.421 "name": null, 00:15:26.421 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:26.421 "is_configured": false, 00:15:26.421 "data_offset": 0, 00:15:26.421 "data_size": 63488 00:15:26.421 }, 00:15:26.421 { 00:15:26.421 "name": null, 00:15:26.421 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:26.421 "is_configured": false, 00:15:26.421 "data_offset": 0, 00:15:26.421 "data_size": 63488 00:15:26.421 } 00:15:26.421 ] 00:15:26.421 }' 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.421 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.990 [2024-11-18 13:31:56.788133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.990 13:31:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.990 "name": "Existed_Raid", 00:15:26.990 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:26.990 "strip_size_kb": 64, 00:15:26.990 "state": "configuring", 00:15:26.990 "raid_level": "raid5f", 00:15:26.990 "superblock": true, 00:15:26.990 "num_base_bdevs": 3, 00:15:26.990 "num_base_bdevs_discovered": 2, 00:15:26.990 "num_base_bdevs_operational": 3, 00:15:26.990 "base_bdevs_list": [ 00:15:26.990 { 00:15:26.990 "name": "BaseBdev1", 00:15:26.990 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:26.990 "is_configured": true, 00:15:26.990 "data_offset": 2048, 00:15:26.990 "data_size": 63488 00:15:26.990 }, 00:15:26.990 { 00:15:26.990 "name": null, 00:15:26.990 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:26.990 "is_configured": false, 00:15:26.990 "data_offset": 0, 00:15:26.990 "data_size": 63488 00:15:26.990 }, 00:15:26.990 { 
00:15:26.990 "name": "BaseBdev3", 00:15:26.990 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:26.990 "is_configured": true, 00:15:26.990 "data_offset": 2048, 00:15:26.990 "data_size": 63488 00:15:26.990 } 00:15:26.990 ] 00:15:26.990 }' 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.990 13:31:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.249 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.249 [2024-11-18 13:31:57.239375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.509 "name": "Existed_Raid", 00:15:27.509 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:27.509 "strip_size_kb": 64, 00:15:27.509 "state": "configuring", 00:15:27.509 "raid_level": "raid5f", 00:15:27.509 "superblock": true, 00:15:27.509 "num_base_bdevs": 3, 00:15:27.509 "num_base_bdevs_discovered": 1, 00:15:27.509 
"num_base_bdevs_operational": 3, 00:15:27.509 "base_bdevs_list": [ 00:15:27.509 { 00:15:27.509 "name": null, 00:15:27.509 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:27.509 "is_configured": false, 00:15:27.509 "data_offset": 0, 00:15:27.509 "data_size": 63488 00:15:27.509 }, 00:15:27.509 { 00:15:27.509 "name": null, 00:15:27.509 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:27.509 "is_configured": false, 00:15:27.509 "data_offset": 0, 00:15:27.509 "data_size": 63488 00:15:27.509 }, 00:15:27.509 { 00:15:27.509 "name": "BaseBdev3", 00:15:27.509 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:27.509 "is_configured": true, 00:15:27.509 "data_offset": 2048, 00:15:27.509 "data_size": 63488 00:15:27.509 } 00:15:27.509 ] 00:15:27.509 }' 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.509 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.769 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.769 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:27.769 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.769 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.769 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.028 13:31:57 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.028 [2024-11-18 13:31:57.834954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.028 "name": "Existed_Raid", 00:15:28.028 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:28.028 "strip_size_kb": 64, 00:15:28.028 "state": "configuring", 00:15:28.028 "raid_level": "raid5f", 00:15:28.028 "superblock": true, 00:15:28.028 "num_base_bdevs": 3, 00:15:28.028 "num_base_bdevs_discovered": 2, 00:15:28.028 "num_base_bdevs_operational": 3, 00:15:28.028 "base_bdevs_list": [ 00:15:28.028 { 00:15:28.028 "name": null, 00:15:28.028 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:28.028 "is_configured": false, 00:15:28.028 "data_offset": 0, 00:15:28.028 "data_size": 63488 00:15:28.028 }, 00:15:28.028 { 00:15:28.028 "name": "BaseBdev2", 00:15:28.028 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:28.028 "is_configured": true, 00:15:28.028 "data_offset": 2048, 00:15:28.028 "data_size": 63488 00:15:28.028 }, 00:15:28.028 { 00:15:28.028 "name": "BaseBdev3", 00:15:28.028 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:28.028 "is_configured": true, 00:15:28.028 "data_offset": 2048, 00:15:28.028 "data_size": 63488 00:15:28.028 } 00:15:28.028 ] 00:15:28.028 }' 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.028 13:31:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.288 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.288 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.288 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.288 13:31:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:28.288 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ab51649c-2118-4d3f-9a31-d2c8387c76d3 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.549 [2024-11-18 13:31:58.437706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:28.549 [2024-11-18 13:31:58.437920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:28.549 [2024-11-18 13:31:58.437946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:28.549 [2024-11-18 13:31:58.438195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:28.549 NewBaseBdev 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.549 [2024-11-18 13:31:58.443229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:28.549 [2024-11-18 13:31:58.443247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:28.549 [2024-11-18 13:31:58.443400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.549 [ 00:15:28.549 { 00:15:28.549 "name": "NewBaseBdev", 00:15:28.549 "aliases": [ 00:15:28.549 
"ab51649c-2118-4d3f-9a31-d2c8387c76d3" 00:15:28.549 ], 00:15:28.549 "product_name": "Malloc disk", 00:15:28.549 "block_size": 512, 00:15:28.549 "num_blocks": 65536, 00:15:28.549 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:28.549 "assigned_rate_limits": { 00:15:28.549 "rw_ios_per_sec": 0, 00:15:28.549 "rw_mbytes_per_sec": 0, 00:15:28.549 "r_mbytes_per_sec": 0, 00:15:28.549 "w_mbytes_per_sec": 0 00:15:28.549 }, 00:15:28.549 "claimed": true, 00:15:28.549 "claim_type": "exclusive_write", 00:15:28.549 "zoned": false, 00:15:28.549 "supported_io_types": { 00:15:28.549 "read": true, 00:15:28.549 "write": true, 00:15:28.549 "unmap": true, 00:15:28.549 "flush": true, 00:15:28.549 "reset": true, 00:15:28.549 "nvme_admin": false, 00:15:28.549 "nvme_io": false, 00:15:28.549 "nvme_io_md": false, 00:15:28.549 "write_zeroes": true, 00:15:28.549 "zcopy": true, 00:15:28.549 "get_zone_info": false, 00:15:28.549 "zone_management": false, 00:15:28.549 "zone_append": false, 00:15:28.549 "compare": false, 00:15:28.549 "compare_and_write": false, 00:15:28.549 "abort": true, 00:15:28.549 "seek_hole": false, 00:15:28.549 "seek_data": false, 00:15:28.549 "copy": true, 00:15:28.549 "nvme_iov_md": false 00:15:28.549 }, 00:15:28.549 "memory_domains": [ 00:15:28.549 { 00:15:28.549 "dma_device_id": "system", 00:15:28.549 "dma_device_type": 1 00:15:28.549 }, 00:15:28.549 { 00:15:28.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.549 "dma_device_type": 2 00:15:28.549 } 00:15:28.549 ], 00:15:28.549 "driver_specific": {} 00:15:28.549 } 00:15:28.549 ] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.549 "name": "Existed_Raid", 00:15:28.549 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:28.549 "strip_size_kb": 64, 00:15:28.549 "state": "online", 00:15:28.549 "raid_level": "raid5f", 00:15:28.549 "superblock": true, 00:15:28.549 "num_base_bdevs": 3, 00:15:28.549 
"num_base_bdevs_discovered": 3, 00:15:28.549 "num_base_bdevs_operational": 3, 00:15:28.549 "base_bdevs_list": [ 00:15:28.549 { 00:15:28.549 "name": "NewBaseBdev", 00:15:28.549 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:28.549 "is_configured": true, 00:15:28.549 "data_offset": 2048, 00:15:28.549 "data_size": 63488 00:15:28.549 }, 00:15:28.549 { 00:15:28.549 "name": "BaseBdev2", 00:15:28.549 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:28.549 "is_configured": true, 00:15:28.549 "data_offset": 2048, 00:15:28.549 "data_size": 63488 00:15:28.549 }, 00:15:28.549 { 00:15:28.549 "name": "BaseBdev3", 00:15:28.549 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:28.549 "is_configured": true, 00:15:28.549 "data_offset": 2048, 00:15:28.549 "data_size": 63488 00:15:28.549 } 00:15:28.549 ] 00:15:28.549 }' 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.549 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.119 [2024-11-18 13:31:58.900753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:29.119 "name": "Existed_Raid", 00:15:29.119 "aliases": [ 00:15:29.119 "4bdf83cb-0be0-4232-9798-284ec05c9e98" 00:15:29.119 ], 00:15:29.119 "product_name": "Raid Volume", 00:15:29.119 "block_size": 512, 00:15:29.119 "num_blocks": 126976, 00:15:29.119 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:29.119 "assigned_rate_limits": { 00:15:29.119 "rw_ios_per_sec": 0, 00:15:29.119 "rw_mbytes_per_sec": 0, 00:15:29.119 "r_mbytes_per_sec": 0, 00:15:29.119 "w_mbytes_per_sec": 0 00:15:29.119 }, 00:15:29.119 "claimed": false, 00:15:29.119 "zoned": false, 00:15:29.119 "supported_io_types": { 00:15:29.119 "read": true, 00:15:29.119 "write": true, 00:15:29.119 "unmap": false, 00:15:29.119 "flush": false, 00:15:29.119 "reset": true, 00:15:29.119 "nvme_admin": false, 00:15:29.119 "nvme_io": false, 00:15:29.119 "nvme_io_md": false, 00:15:29.119 "write_zeroes": true, 00:15:29.119 "zcopy": false, 00:15:29.119 "get_zone_info": false, 00:15:29.119 "zone_management": false, 00:15:29.119 "zone_append": false, 00:15:29.119 "compare": false, 00:15:29.119 "compare_and_write": false, 00:15:29.119 "abort": false, 00:15:29.119 "seek_hole": false, 00:15:29.119 "seek_data": false, 00:15:29.119 "copy": false, 00:15:29.119 "nvme_iov_md": false 00:15:29.119 }, 00:15:29.119 "driver_specific": { 00:15:29.119 "raid": { 00:15:29.119 "uuid": "4bdf83cb-0be0-4232-9798-284ec05c9e98", 00:15:29.119 "strip_size_kb": 64, 00:15:29.119 "state": 
"online", 00:15:29.119 "raid_level": "raid5f", 00:15:29.119 "superblock": true, 00:15:29.119 "num_base_bdevs": 3, 00:15:29.119 "num_base_bdevs_discovered": 3, 00:15:29.119 "num_base_bdevs_operational": 3, 00:15:29.119 "base_bdevs_list": [ 00:15:29.119 { 00:15:29.119 "name": "NewBaseBdev", 00:15:29.119 "uuid": "ab51649c-2118-4d3f-9a31-d2c8387c76d3", 00:15:29.119 "is_configured": true, 00:15:29.119 "data_offset": 2048, 00:15:29.119 "data_size": 63488 00:15:29.119 }, 00:15:29.119 { 00:15:29.119 "name": "BaseBdev2", 00:15:29.119 "uuid": "7e38e5ab-864d-4727-a9a9-1a3bb1847556", 00:15:29.119 "is_configured": true, 00:15:29.119 "data_offset": 2048, 00:15:29.119 "data_size": 63488 00:15:29.119 }, 00:15:29.119 { 00:15:29.119 "name": "BaseBdev3", 00:15:29.119 "uuid": "e7ce9a1e-cc2f-48cf-9934-65b7c3b660d9", 00:15:29.119 "is_configured": true, 00:15:29.119 "data_offset": 2048, 00:15:29.119 "data_size": 63488 00:15:29.119 } 00:15:29.119 ] 00:15:29.119 } 00:15:29.119 } 00:15:29.119 }' 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:29.119 BaseBdev2 00:15:29.119 BaseBdev3' 00:15:29.119 13:31:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.119 13:31:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.119 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.378 [2024-11-18 13:31:59.184124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.378 [2024-11-18 13:31:59.184159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.378 [2024-11-18 13:31:59.184221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.378 [2024-11-18 13:31:59.184481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.378 [2024-11-18 13:31:59.184501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80456 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80456 ']' 00:15:29.378 13:31:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80456 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80456 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.378 killing process with pid 80456 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80456' 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80456 00:15:29.378 [2024-11-18 13:31:59.235663] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.378 13:31:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80456 00:15:29.637 [2024-11-18 13:31:59.523355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.575 13:32:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:30.575 00:15:30.575 real 0m10.494s 00:15:30.575 user 0m16.693s 00:15:30.575 sys 0m1.968s 00:15:30.575 13:32:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.575 13:32:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.575 ************************************ 00:15:30.575 END TEST raid5f_state_function_test_sb 00:15:30.575 ************************************ 00:15:30.836 13:32:00 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:15:30.836 13:32:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:30.836 13:32:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.836 13:32:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.836 ************************************ 00:15:30.836 START TEST raid5f_superblock_test 00:15:30.836 ************************************ 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81078 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81078 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81078 ']' 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.836 13:32:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.836 [2024-11-18 13:32:00.757799] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:15:30.836 [2024-11-18 13:32:00.757941] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81078 ] 00:15:31.096 [2024-11-18 13:32:00.941480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.096 [2024-11-18 13:32:01.046211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.365 [2024-11-18 13:32:01.234543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.365 [2024-11-18 13:32:01.234597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.635 malloc1 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.635 [2024-11-18 13:32:01.620311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:31.635 [2024-11-18 13:32:01.620369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.635 [2024-11-18 13:32:01.620391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.635 [2024-11-18 13:32:01.620400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.635 [2024-11-18 13:32:01.622338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.635 [2024-11-18 13:32:01.622376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:31.635 pt1 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.635 malloc2 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.635 [2024-11-18 13:32:01.675847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.635 [2024-11-18 13:32:01.675899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.635 [2024-11-18 13:32:01.675932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:31.635 [2024-11-18 13:32:01.675940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.635 [2024-11-18 13:32:01.677871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.635 [2024-11-18 13:32:01.677905] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.635 pt2 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.635 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.895 malloc3 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.895 [2024-11-18 13:32:01.741761] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:31.895 [2024-11-18 13:32:01.741809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.895 [2024-11-18 13:32:01.741828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:31.895 [2024-11-18 13:32:01.741836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.895 [2024-11-18 13:32:01.743826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.895 [2024-11-18 13:32:01.743872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:31.895 pt3 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.895 [2024-11-18 13:32:01.753776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:31.895 [2024-11-18 13:32:01.755470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.895 [2024-11-18 13:32:01.755532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:31.895 [2024-11-18 13:32:01.755699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:31.895 [2024-11-18 13:32:01.755733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:31.895 [2024-11-18 13:32:01.755952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:31.895 [2024-11-18 13:32:01.761493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:31.895 [2024-11-18 13:32:01.761515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:31.895 [2024-11-18 13:32:01.761684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.895 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.895 "name": "raid_bdev1", 00:15:31.895 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:31.895 "strip_size_kb": 64, 00:15:31.895 "state": "online", 00:15:31.895 "raid_level": "raid5f", 00:15:31.895 "superblock": true, 00:15:31.895 "num_base_bdevs": 3, 00:15:31.896 "num_base_bdevs_discovered": 3, 00:15:31.896 "num_base_bdevs_operational": 3, 00:15:31.896 "base_bdevs_list": [ 00:15:31.896 { 00:15:31.896 "name": "pt1", 00:15:31.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.896 "is_configured": true, 00:15:31.896 "data_offset": 2048, 00:15:31.896 "data_size": 63488 00:15:31.896 }, 00:15:31.896 { 00:15:31.896 "name": "pt2", 00:15:31.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.896 "is_configured": true, 00:15:31.896 "data_offset": 2048, 00:15:31.896 "data_size": 63488 00:15:31.896 }, 00:15:31.896 { 00:15:31.896 "name": "pt3", 00:15:31.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.896 "is_configured": true, 00:15:31.896 "data_offset": 2048, 00:15:31.896 "data_size": 63488 00:15:31.896 } 00:15:31.896 ] 00:15:31.896 }' 00:15:31.896 13:32:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.896 13:32:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:32.155 13:32:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.155 [2024-11-18 13:32:02.155291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.155 "name": "raid_bdev1", 00:15:32.155 "aliases": [ 00:15:32.155 "cdd40d35-f7ea-4355-ba55-fde8b24a856a" 00:15:32.155 ], 00:15:32.155 "product_name": "Raid Volume", 00:15:32.155 "block_size": 512, 00:15:32.155 "num_blocks": 126976, 00:15:32.155 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:32.155 "assigned_rate_limits": { 00:15:32.155 "rw_ios_per_sec": 0, 00:15:32.155 "rw_mbytes_per_sec": 0, 00:15:32.155 "r_mbytes_per_sec": 0, 00:15:32.155 "w_mbytes_per_sec": 0 00:15:32.155 }, 00:15:32.155 "claimed": false, 00:15:32.155 "zoned": false, 00:15:32.155 "supported_io_types": { 00:15:32.155 "read": true, 00:15:32.155 "write": true, 00:15:32.155 "unmap": false, 00:15:32.155 "flush": false, 00:15:32.155 "reset": true, 00:15:32.155 "nvme_admin": false, 00:15:32.155 "nvme_io": false, 00:15:32.155 "nvme_io_md": false, 
00:15:32.155 "write_zeroes": true, 00:15:32.155 "zcopy": false, 00:15:32.155 "get_zone_info": false, 00:15:32.155 "zone_management": false, 00:15:32.155 "zone_append": false, 00:15:32.155 "compare": false, 00:15:32.155 "compare_and_write": false, 00:15:32.155 "abort": false, 00:15:32.155 "seek_hole": false, 00:15:32.155 "seek_data": false, 00:15:32.155 "copy": false, 00:15:32.155 "nvme_iov_md": false 00:15:32.155 }, 00:15:32.155 "driver_specific": { 00:15:32.155 "raid": { 00:15:32.155 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:32.155 "strip_size_kb": 64, 00:15:32.155 "state": "online", 00:15:32.155 "raid_level": "raid5f", 00:15:32.155 "superblock": true, 00:15:32.155 "num_base_bdevs": 3, 00:15:32.155 "num_base_bdevs_discovered": 3, 00:15:32.155 "num_base_bdevs_operational": 3, 00:15:32.155 "base_bdevs_list": [ 00:15:32.155 { 00:15:32.155 "name": "pt1", 00:15:32.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.155 "is_configured": true, 00:15:32.155 "data_offset": 2048, 00:15:32.155 "data_size": 63488 00:15:32.155 }, 00:15:32.155 { 00:15:32.155 "name": "pt2", 00:15:32.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.155 "is_configured": true, 00:15:32.155 "data_offset": 2048, 00:15:32.155 "data_size": 63488 00:15:32.155 }, 00:15:32.155 { 00:15:32.155 "name": "pt3", 00:15:32.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.155 "is_configured": true, 00:15:32.155 "data_offset": 2048, 00:15:32.155 "data_size": 63488 00:15:32.155 } 00:15:32.155 ] 00:15:32.155 } 00:15:32.155 } 00:15:32.155 }' 00:15:32.155 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:32.414 pt2 00:15:32.414 pt3' 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.414 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.415 
13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.415 [2024-11-18 13:32:02.414909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cdd40d35-f7ea-4355-ba55-fde8b24a856a 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cdd40d35-f7ea-4355-ba55-fde8b24a856a ']' 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:32.415 13:32:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.415 [2024-11-18 13:32:02.458751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.415 [2024-11-18 13:32:02.458778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.415 [2024-11-18 13:32:02.458844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.415 [2024-11-18 13:32:02.458912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.415 [2024-11-18 13:32:02.458929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:32.415 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 [2024-11-18 13:32:02.614524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:32.675 [2024-11-18 13:32:02.616318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:32.675 [2024-11-18 13:32:02.616369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:32.675 [2024-11-18 13:32:02.616426] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:32.675 [2024-11-18 13:32:02.616465] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:32.675 [2024-11-18 13:32:02.616482] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:32.675 [2024-11-18 13:32:02.616497] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.675 [2024-11-18 13:32:02.616505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:32.675 request: 00:15:32.675 { 00:15:32.675 "name": "raid_bdev1", 00:15:32.675 "raid_level": "raid5f", 00:15:32.675 "base_bdevs": [ 00:15:32.675 "malloc1", 00:15:32.675 "malloc2", 00:15:32.675 "malloc3" 00:15:32.675 ], 00:15:32.675 "strip_size_kb": 64, 00:15:32.675 "superblock": false, 00:15:32.675 "method": "bdev_raid_create", 00:15:32.675 "req_id": 1 00:15:32.675 } 00:15:32.675 Got JSON-RPC error response 00:15:32.675 response: 00:15:32.675 { 00:15:32.675 "code": -17, 00:15:32.675 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:32.675 } 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 
13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 [2024-11-18 13:32:02.678370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:32.675 [2024-11-18 13:32:02.678416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.675 [2024-11-18 13:32:02.678432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:32.675 [2024-11-18 13:32:02.678440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.675 [2024-11-18 13:32:02.680517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.675 [2024-11-18 13:32:02.680552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:32.675 [2024-11-18 13:32:02.680620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:32.675 [2024-11-18 13:32:02.680664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:32.675 pt1 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.935 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.935 "name": "raid_bdev1", 00:15:32.935 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:32.935 "strip_size_kb": 64, 00:15:32.935 "state": "configuring", 00:15:32.935 "raid_level": "raid5f", 00:15:32.935 "superblock": true, 00:15:32.935 "num_base_bdevs": 3, 00:15:32.935 "num_base_bdevs_discovered": 1, 00:15:32.935 
"num_base_bdevs_operational": 3, 00:15:32.935 "base_bdevs_list": [ 00:15:32.935 { 00:15:32.935 "name": "pt1", 00:15:32.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.935 "is_configured": true, 00:15:32.935 "data_offset": 2048, 00:15:32.935 "data_size": 63488 00:15:32.935 }, 00:15:32.935 { 00:15:32.935 "name": null, 00:15:32.935 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.935 "is_configured": false, 00:15:32.935 "data_offset": 2048, 00:15:32.935 "data_size": 63488 00:15:32.935 }, 00:15:32.935 { 00:15:32.935 "name": null, 00:15:32.935 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:32.935 "is_configured": false, 00:15:32.935 "data_offset": 2048, 00:15:32.935 "data_size": 63488 00:15:32.935 } 00:15:32.935 ] 00:15:32.935 }' 00:15:32.935 13:32:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.935 13:32:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.195 [2024-11-18 13:32:03.165541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.195 [2024-11-18 13:32:03.165597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.195 [2024-11-18 13:32:03.165616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:33.195 [2024-11-18 13:32:03.165625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.195 [2024-11-18 13:32:03.166010] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.195 [2024-11-18 13:32:03.166050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.195 [2024-11-18 13:32:03.166140] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:33.195 [2024-11-18 13:32:03.166160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.195 pt2 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.195 [2024-11-18 13:32:03.177525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.195 "name": "raid_bdev1", 00:15:33.195 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:33.195 "strip_size_kb": 64, 00:15:33.195 "state": "configuring", 00:15:33.195 "raid_level": "raid5f", 00:15:33.195 "superblock": true, 00:15:33.195 "num_base_bdevs": 3, 00:15:33.195 "num_base_bdevs_discovered": 1, 00:15:33.195 "num_base_bdevs_operational": 3, 00:15:33.195 "base_bdevs_list": [ 00:15:33.195 { 00:15:33.195 "name": "pt1", 00:15:33.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.195 "is_configured": true, 00:15:33.195 "data_offset": 2048, 00:15:33.195 "data_size": 63488 00:15:33.195 }, 00:15:33.195 { 00:15:33.195 "name": null, 00:15:33.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.195 "is_configured": false, 00:15:33.195 "data_offset": 0, 00:15:33.195 "data_size": 63488 00:15:33.195 }, 00:15:33.195 { 00:15:33.195 "name": null, 00:15:33.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.195 "is_configured": false, 00:15:33.195 "data_offset": 2048, 00:15:33.195 "data_size": 63488 00:15:33.195 } 00:15:33.195 ] 00:15:33.195 }' 00:15:33.195 13:32:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.195 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.763 [2024-11-18 13:32:03.624730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.763 [2024-11-18 13:32:03.624791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.763 [2024-11-18 13:32:03.624808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:33.763 [2024-11-18 13:32:03.624818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.763 [2024-11-18 13:32:03.625244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.763 [2024-11-18 13:32:03.625273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.763 [2024-11-18 13:32:03.625344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:33.763 [2024-11-18 13:32:03.625365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.763 pt2 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:33.763 13:32:03 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.763 [2024-11-18 13:32:03.636698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:33.763 [2024-11-18 13:32:03.636741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.763 [2024-11-18 13:32:03.636753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:33.763 [2024-11-18 13:32:03.636763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.763 [2024-11-18 13:32:03.637086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.763 [2024-11-18 13:32:03.637119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:33.763 [2024-11-18 13:32:03.637182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:33.763 [2024-11-18 13:32:03.637201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:33.763 [2024-11-18 13:32:03.637322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:33.763 [2024-11-18 13:32:03.637339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:33.763 [2024-11-18 13:32:03.637557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:33.763 [2024-11-18 13:32:03.642918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:33.763 [2024-11-18 13:32:03.642941] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:33.763 [2024-11-18 13:32:03.643109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.763 pt3 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.763 "name": "raid_bdev1", 00:15:33.763 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:33.763 "strip_size_kb": 64, 00:15:33.763 "state": "online", 00:15:33.763 "raid_level": "raid5f", 00:15:33.763 "superblock": true, 00:15:33.763 "num_base_bdevs": 3, 00:15:33.763 "num_base_bdevs_discovered": 3, 00:15:33.763 "num_base_bdevs_operational": 3, 00:15:33.763 "base_bdevs_list": [ 00:15:33.763 { 00:15:33.763 "name": "pt1", 00:15:33.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.763 "is_configured": true, 00:15:33.763 "data_offset": 2048, 00:15:33.763 "data_size": 63488 00:15:33.763 }, 00:15:33.763 { 00:15:33.763 "name": "pt2", 00:15:33.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.763 "is_configured": true, 00:15:33.763 "data_offset": 2048, 00:15:33.763 "data_size": 63488 00:15:33.763 }, 00:15:33.763 { 00:15:33.763 "name": "pt3", 00:15:33.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.763 "is_configured": true, 00:15:33.763 "data_offset": 2048, 00:15:33.763 "data_size": 63488 00:15:33.763 } 00:15:33.763 ] 00:15:33.763 }' 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.763 13:32:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.023 
13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.023 [2024-11-18 13:32:04.040742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.023 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.282 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.283 "name": "raid_bdev1", 00:15:34.283 "aliases": [ 00:15:34.283 "cdd40d35-f7ea-4355-ba55-fde8b24a856a" 00:15:34.283 ], 00:15:34.283 "product_name": "Raid Volume", 00:15:34.283 "block_size": 512, 00:15:34.283 "num_blocks": 126976, 00:15:34.283 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:34.283 "assigned_rate_limits": { 00:15:34.283 "rw_ios_per_sec": 0, 00:15:34.283 "rw_mbytes_per_sec": 0, 00:15:34.283 "r_mbytes_per_sec": 0, 00:15:34.283 "w_mbytes_per_sec": 0 00:15:34.283 }, 00:15:34.283 "claimed": false, 00:15:34.283 "zoned": false, 00:15:34.283 "supported_io_types": { 00:15:34.283 "read": true, 00:15:34.283 "write": true, 00:15:34.283 "unmap": false, 00:15:34.283 "flush": false, 00:15:34.283 "reset": true, 00:15:34.283 "nvme_admin": false, 00:15:34.283 "nvme_io": false, 00:15:34.283 "nvme_io_md": false, 00:15:34.283 "write_zeroes": true, 00:15:34.283 "zcopy": false, 00:15:34.283 "get_zone_info": false, 
00:15:34.283 "zone_management": false, 00:15:34.283 "zone_append": false, 00:15:34.283 "compare": false, 00:15:34.283 "compare_and_write": false, 00:15:34.283 "abort": false, 00:15:34.283 "seek_hole": false, 00:15:34.283 "seek_data": false, 00:15:34.283 "copy": false, 00:15:34.283 "nvme_iov_md": false 00:15:34.283 }, 00:15:34.283 "driver_specific": { 00:15:34.283 "raid": { 00:15:34.283 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:34.283 "strip_size_kb": 64, 00:15:34.283 "state": "online", 00:15:34.283 "raid_level": "raid5f", 00:15:34.283 "superblock": true, 00:15:34.283 "num_base_bdevs": 3, 00:15:34.283 "num_base_bdevs_discovered": 3, 00:15:34.283 "num_base_bdevs_operational": 3, 00:15:34.283 "base_bdevs_list": [ 00:15:34.283 { 00:15:34.283 "name": "pt1", 00:15:34.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.283 "is_configured": true, 00:15:34.283 "data_offset": 2048, 00:15:34.283 "data_size": 63488 00:15:34.283 }, 00:15:34.283 { 00:15:34.283 "name": "pt2", 00:15:34.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.283 "is_configured": true, 00:15:34.283 "data_offset": 2048, 00:15:34.283 "data_size": 63488 00:15:34.283 }, 00:15:34.283 { 00:15:34.283 "name": "pt3", 00:15:34.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.283 "is_configured": true, 00:15:34.283 "data_offset": 2048, 00:15:34.283 "data_size": 63488 00:15:34.283 } 00:15:34.283 ] 00:15:34.283 } 00:15:34.283 } 00:15:34.283 }' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:34.283 pt2 00:15:34.283 pt3' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.283 [2024-11-18 13:32:04.300277] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.283 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cdd40d35-f7ea-4355-ba55-fde8b24a856a '!=' cdd40d35-f7ea-4355-ba55-fde8b24a856a ']' 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:34.543 13:32:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.543 [2024-11-18 13:32:04.344080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.543 13:32:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.543 "name": "raid_bdev1", 00:15:34.543 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:34.543 "strip_size_kb": 64, 00:15:34.543 "state": "online", 00:15:34.543 "raid_level": "raid5f", 00:15:34.543 "superblock": true, 00:15:34.543 "num_base_bdevs": 3, 00:15:34.543 "num_base_bdevs_discovered": 2, 00:15:34.543 "num_base_bdevs_operational": 2, 00:15:34.543 "base_bdevs_list": [ 00:15:34.543 { 00:15:34.543 "name": null, 00:15:34.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.543 "is_configured": false, 00:15:34.543 "data_offset": 0, 00:15:34.543 "data_size": 63488 00:15:34.543 }, 00:15:34.543 { 00:15:34.543 "name": "pt2", 00:15:34.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.543 "is_configured": true, 00:15:34.543 "data_offset": 2048, 00:15:34.543 "data_size": 63488 00:15:34.543 }, 00:15:34.543 { 00:15:34.543 "name": "pt3", 00:15:34.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:34.543 "is_configured": true, 00:15:34.543 "data_offset": 2048, 00:15:34.543 "data_size": 63488 00:15:34.543 } 00:15:34.543 ] 00:15:34.543 }' 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.543 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.802 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.803 [2024-11-18 13:32:04.763287] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.803 [2024-11-18 13:32:04.763312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.803 [2024-11-18 13:32:04.763363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.803 [2024-11-18 13:32:04.763410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.803 [2024-11-18 13:32:04.763422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.803 [2024-11-18 13:32:04.847157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.803 [2024-11-18 13:32:04.847203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.803 [2024-11-18 13:32:04.847217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:34.803 [2024-11-18 13:32:04.847227] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:34.803 [2024-11-18 13:32:04.849276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.803 [2024-11-18 13:32:04.849313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.803 [2024-11-18 13:32:04.849379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:34.803 [2024-11-18 13:32:04.849425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.803 pt2 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.803 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.063 "name": "raid_bdev1", 00:15:35.063 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:35.063 "strip_size_kb": 64, 00:15:35.063 "state": "configuring", 00:15:35.063 "raid_level": "raid5f", 00:15:35.063 "superblock": true, 00:15:35.063 "num_base_bdevs": 3, 00:15:35.063 "num_base_bdevs_discovered": 1, 00:15:35.063 "num_base_bdevs_operational": 2, 00:15:35.063 "base_bdevs_list": [ 00:15:35.063 { 00:15:35.063 "name": null, 00:15:35.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.063 "is_configured": false, 00:15:35.063 "data_offset": 2048, 00:15:35.063 "data_size": 63488 00:15:35.063 }, 00:15:35.063 { 00:15:35.063 "name": "pt2", 00:15:35.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.063 "is_configured": true, 00:15:35.063 "data_offset": 2048, 00:15:35.063 "data_size": 63488 00:15:35.063 }, 00:15:35.063 { 00:15:35.063 "name": null, 00:15:35.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.063 "is_configured": false, 00:15:35.063 "data_offset": 2048, 00:15:35.063 "data_size": 63488 00:15:35.063 } 00:15:35.063 ] 00:15:35.063 }' 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.063 13:32:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.322 [2024-11-18 13:32:05.286390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:35.322 [2024-11-18 13:32:05.286440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.322 [2024-11-18 13:32:05.286458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:35.322 [2024-11-18 13:32:05.286467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.322 [2024-11-18 13:32:05.286870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.322 [2024-11-18 13:32:05.286900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:35.322 [2024-11-18 13:32:05.286966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:35.322 [2024-11-18 13:32:05.286995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:35.322 [2024-11-18 13:32:05.287118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:35.322 [2024-11-18 13:32:05.287158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:35.322 [2024-11-18 13:32:05.287391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:35.322 [2024-11-18 13:32:05.292273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:35.322 [2024-11-18 13:32:05.292294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:35.322 [2024-11-18 13:32:05.292553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.322 pt3 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.322 13:32:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.322 "name": "raid_bdev1", 00:15:35.322 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:35.322 "strip_size_kb": 64, 00:15:35.322 "state": "online", 00:15:35.322 "raid_level": "raid5f", 00:15:35.322 "superblock": true, 00:15:35.322 "num_base_bdevs": 3, 00:15:35.322 "num_base_bdevs_discovered": 2, 00:15:35.322 "num_base_bdevs_operational": 2, 00:15:35.322 "base_bdevs_list": [ 00:15:35.322 { 00:15:35.322 "name": null, 00:15:35.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.322 "is_configured": false, 00:15:35.322 "data_offset": 2048, 00:15:35.322 "data_size": 63488 00:15:35.322 }, 00:15:35.322 { 00:15:35.322 "name": "pt2", 00:15:35.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.322 "is_configured": true, 00:15:35.322 "data_offset": 2048, 00:15:35.322 "data_size": 63488 00:15:35.322 }, 00:15:35.322 { 00:15:35.322 "name": "pt3", 00:15:35.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.322 "is_configured": true, 00:15:35.322 "data_offset": 2048, 00:15:35.322 "data_size": 63488 00:15:35.322 } 00:15:35.322 ] 00:15:35.322 }' 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.322 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 [2024-11-18 13:32:05.737776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.892 [2024-11-18 13:32:05.737802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.892 [2024-11-18 13:32:05.737852] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.892 [2024-11-18 13:32:05.737897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.892 [2024-11-18 13:32:05.737906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 [2024-11-18 13:32:05.801693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.892 [2024-11-18 13:32:05.801739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.892 [2024-11-18 13:32:05.801754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:35.892 [2024-11-18 13:32:05.801762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.892 [2024-11-18 13:32:05.803840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.892 [2024-11-18 13:32:05.803875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.892 [2024-11-18 13:32:05.803933] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.892 [2024-11-18 13:32:05.803972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.892 [2024-11-18 13:32:05.804080] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:35.892 [2024-11-18 13:32:05.804090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.892 [2024-11-18 13:32:05.804104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:35.892 [2024-11-18 13:32:05.804179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.892 pt1 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:35.892 13:32:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.892 "name": "raid_bdev1", 00:15:35.892 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:35.892 "strip_size_kb": 64, 00:15:35.892 "state": "configuring", 00:15:35.892 "raid_level": "raid5f", 00:15:35.892 
"superblock": true, 00:15:35.892 "num_base_bdevs": 3, 00:15:35.892 "num_base_bdevs_discovered": 1, 00:15:35.892 "num_base_bdevs_operational": 2, 00:15:35.892 "base_bdevs_list": [ 00:15:35.892 { 00:15:35.892 "name": null, 00:15:35.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.892 "is_configured": false, 00:15:35.892 "data_offset": 2048, 00:15:35.892 "data_size": 63488 00:15:35.892 }, 00:15:35.892 { 00:15:35.892 "name": "pt2", 00:15:35.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.892 "is_configured": true, 00:15:35.892 "data_offset": 2048, 00:15:35.892 "data_size": 63488 00:15:35.892 }, 00:15:35.892 { 00:15:35.892 "name": null, 00:15:35.892 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.892 "is_configured": false, 00:15:35.892 "data_offset": 2048, 00:15:35.892 "data_size": 63488 00:15:35.892 } 00:15:35.892 ] 00:15:35.892 }' 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.892 13:32:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.463 [2024-11-18 13:32:06.332784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.463 [2024-11-18 13:32:06.332831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.463 [2024-11-18 13:32:06.332847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:36.463 [2024-11-18 13:32:06.332855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.463 [2024-11-18 13:32:06.333222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.463 [2024-11-18 13:32:06.333248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.463 [2024-11-18 13:32:06.333310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:36.463 [2024-11-18 13:32:06.333329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:36.463 [2024-11-18 13:32:06.333446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:36.463 [2024-11-18 13:32:06.333461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.463 [2024-11-18 13:32:06.333686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:36.463 [2024-11-18 13:32:06.338903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:36.463 [2024-11-18 13:32:06.338928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:36.463 [2024-11-18 13:32:06.339156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.463 pt3 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.463 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.463 "name": "raid_bdev1", 00:15:36.463 "uuid": "cdd40d35-f7ea-4355-ba55-fde8b24a856a", 00:15:36.463 "strip_size_kb": 64, 00:15:36.463 "state": "online", 00:15:36.463 "raid_level": 
"raid5f", 00:15:36.463 "superblock": true, 00:15:36.463 "num_base_bdevs": 3, 00:15:36.463 "num_base_bdevs_discovered": 2, 00:15:36.463 "num_base_bdevs_operational": 2, 00:15:36.463 "base_bdevs_list": [ 00:15:36.463 { 00:15:36.463 "name": null, 00:15:36.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.463 "is_configured": false, 00:15:36.463 "data_offset": 2048, 00:15:36.463 "data_size": 63488 00:15:36.463 }, 00:15:36.463 { 00:15:36.463 "name": "pt2", 00:15:36.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.463 "is_configured": true, 00:15:36.464 "data_offset": 2048, 00:15:36.464 "data_size": 63488 00:15:36.464 }, 00:15:36.464 { 00:15:36.464 "name": "pt3", 00:15:36.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.464 "is_configured": true, 00:15:36.464 "data_offset": 2048, 00:15:36.464 "data_size": 63488 00:15:36.464 } 00:15:36.464 ] 00:15:36.464 }' 00:15:36.464 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.464 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.723 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:36.723 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:36.723 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.723 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.723 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.983 [2024-11-18 13:32:06.804493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cdd40d35-f7ea-4355-ba55-fde8b24a856a '!=' cdd40d35-f7ea-4355-ba55-fde8b24a856a ']' 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81078 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81078 ']' 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81078 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81078 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.983 killing process with pid 81078 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81078' 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81078 00:15:36.983 [2024-11-18 13:32:06.872867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.983 [2024-11-18 13:32:06.872956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:36.983 [2024-11-18 13:32:06.873019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.983 [2024-11-18 13:32:06.873031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:36.983 13:32:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81078 00:15:37.242 [2024-11-18 13:32:07.167323] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.181 13:32:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:38.181 00:15:38.181 real 0m7.580s 00:15:38.181 user 0m11.832s 00:15:38.181 sys 0m1.418s 00:15:38.181 13:32:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.181 13:32:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.181 ************************************ 00:15:38.181 END TEST raid5f_superblock_test 00:15:38.181 ************************************ 00:15:38.441 13:32:08 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:38.441 13:32:08 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:38.441 13:32:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:38.441 13:32:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.441 13:32:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.441 ************************************ 00:15:38.441 START TEST raid5f_rebuild_test 00:15:38.441 ************************************ 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:38.441 13:32:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81522 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:38.441 13:32:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81522 00:15:38.442 13:32:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81522 ']' 00:15:38.442 13:32:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.442 13:32:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.442 13:32:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:38.442 13:32:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.442 13:32:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.442 [2024-11-18 13:32:08.415338] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:38.442 [2024-11-18 13:32:08.415909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81522 ] 00:15:38.442 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:38.442 Zero copy mechanism will not be used. 00:15:38.702 [2024-11-18 13:32:08.579528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.702 [2024-11-18 13:32:08.685830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.962 [2024-11-18 13:32:08.868907] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.962 [2024-11-18 13:32:08.868946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.223 BaseBdev1_malloc 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.223 
13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.223 [2024-11-18 13:32:09.266462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:39.223 [2024-11-18 13:32:09.266523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.223 [2024-11-18 13:32:09.266548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:39.223 [2024-11-18 13:32:09.266565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.223 [2024-11-18 13:32:09.268565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.223 [2024-11-18 13:32:09.268603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.223 BaseBdev1 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.223 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 BaseBdev2_malloc 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 [2024-11-18 13:32:09.319511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:39.483 [2024-11-18 13:32:09.319565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.483 [2024-11-18 13:32:09.319583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:39.483 [2024-11-18 13:32:09.319596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.483 [2024-11-18 13:32:09.321489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.483 [2024-11-18 13:32:09.321526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:39.483 BaseBdev2 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 BaseBdev3_malloc 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 [2024-11-18 13:32:09.407107] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:39.483 [2024-11-18 13:32:09.407169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.483 [2024-11-18 13:32:09.407191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:39.483 [2024-11-18 13:32:09.407202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.483 [2024-11-18 13:32:09.409205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.483 [2024-11-18 13:32:09.409243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:39.483 BaseBdev3 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 spare_malloc 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 spare_delay 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 [2024-11-18 13:32:09.471771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.483 [2024-11-18 13:32:09.471826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.483 [2024-11-18 13:32:09.471845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:39.483 [2024-11-18 13:32:09.471857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.483 [2024-11-18 13:32:09.473869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.483 [2024-11-18 13:32:09.473910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.483 spare 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 [2024-11-18 13:32:09.483808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.483 [2024-11-18 13:32:09.485515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.483 [2024-11-18 13:32:09.485576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.483 [2024-11-18 13:32:09.485654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:39.483 [2024-11-18 13:32:09.485666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:39.483 [2024-11-18 
13:32:09.485903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:39.483 [2024-11-18 13:32:09.491363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:39.483 [2024-11-18 13:32:09.491390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:39.483 [2024-11-18 13:32:09.491585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.742 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.742 "name": "raid_bdev1", 00:15:39.742 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:39.742 "strip_size_kb": 64, 00:15:39.742 "state": "online", 00:15:39.742 "raid_level": "raid5f", 00:15:39.742 "superblock": false, 00:15:39.742 "num_base_bdevs": 3, 00:15:39.742 "num_base_bdevs_discovered": 3, 00:15:39.742 "num_base_bdevs_operational": 3, 00:15:39.742 "base_bdevs_list": [ 00:15:39.742 { 00:15:39.742 "name": "BaseBdev1", 00:15:39.742 "uuid": "054babb1-7a07-5cfd-870d-f5ebf8de6283", 00:15:39.742 "is_configured": true, 00:15:39.742 "data_offset": 0, 00:15:39.742 "data_size": 65536 00:15:39.742 }, 00:15:39.742 { 00:15:39.742 "name": "BaseBdev2", 00:15:39.742 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:39.742 "is_configured": true, 00:15:39.742 "data_offset": 0, 00:15:39.742 "data_size": 65536 00:15:39.742 }, 00:15:39.743 { 00:15:39.743 "name": "BaseBdev3", 00:15:39.743 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:39.743 "is_configured": true, 00:15:39.743 "data_offset": 0, 00:15:39.743 "data_size": 65536 00:15:39.743 } 00:15:39.743 ] 00:15:39.743 }' 00:15:39.743 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.743 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:40.002 13:32:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.002 [2024-11-18 13:32:09.917247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:40.002 13:32:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.002 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:40.261 [2024-11-18 13:32:10.188628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:40.261 /dev/nbd0 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.261 1+0 records in 00:15:40.261 1+0 records out 00:15:40.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042995 s, 9.5 MB/s 00:15:40.261 13:32:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:40.261 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:40.849 512+0 records in 00:15:40.849 512+0 records out 00:15:40.849 67108864 bytes (67 MB, 64 MiB) copied, 0.479915 s, 140 MB/s 00:15:40.849 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:40.849 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.849 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.849 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.849 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:40.849 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:15:40.849 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.109 [2024-11-18 13:32:10.968995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.109 [2024-11-18 13:32:10.983905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.109 13:32:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.109 13:32:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.109 13:32:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.109 13:32:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.109 "name": "raid_bdev1", 00:15:41.109 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:41.109 "strip_size_kb": 64, 00:15:41.109 "state": "online", 00:15:41.109 "raid_level": "raid5f", 00:15:41.109 "superblock": false, 00:15:41.109 "num_base_bdevs": 3, 00:15:41.109 "num_base_bdevs_discovered": 2, 00:15:41.109 "num_base_bdevs_operational": 2, 00:15:41.109 "base_bdevs_list": [ 00:15:41.109 { 00:15:41.109 "name": null, 00:15:41.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.109 "is_configured": false, 00:15:41.109 "data_offset": 0, 00:15:41.109 "data_size": 65536 00:15:41.109 }, 00:15:41.109 { 00:15:41.109 
"name": "BaseBdev2", 00:15:41.109 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:41.109 "is_configured": true, 00:15:41.109 "data_offset": 0, 00:15:41.109 "data_size": 65536 00:15:41.109 }, 00:15:41.109 { 00:15:41.109 "name": "BaseBdev3", 00:15:41.109 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:41.109 "is_configured": true, 00:15:41.109 "data_offset": 0, 00:15:41.109 "data_size": 65536 00:15:41.109 } 00:15:41.109 ] 00:15:41.109 }' 00:15:41.109 13:32:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.109 13:32:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.680 13:32:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.680 13:32:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.680 13:32:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.680 [2024-11-18 13:32:11.463110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.680 [2024-11-18 13:32:11.480004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:41.680 13:32:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.680 13:32:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:41.680 [2024-11-18 13:32:11.487341] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.620 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.620 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.620 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.621 "name": "raid_bdev1", 00:15:42.621 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:42.621 "strip_size_kb": 64, 00:15:42.621 "state": "online", 00:15:42.621 "raid_level": "raid5f", 00:15:42.621 "superblock": false, 00:15:42.621 "num_base_bdevs": 3, 00:15:42.621 "num_base_bdevs_discovered": 3, 00:15:42.621 "num_base_bdevs_operational": 3, 00:15:42.621 "process": { 00:15:42.621 "type": "rebuild", 00:15:42.621 "target": "spare", 00:15:42.621 "progress": { 00:15:42.621 "blocks": 20480, 00:15:42.621 "percent": 15 00:15:42.621 } 00:15:42.621 }, 00:15:42.621 "base_bdevs_list": [ 00:15:42.621 { 00:15:42.621 "name": "spare", 00:15:42.621 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:42.621 "is_configured": true, 00:15:42.621 "data_offset": 0, 00:15:42.621 "data_size": 65536 00:15:42.621 }, 00:15:42.621 { 00:15:42.621 "name": "BaseBdev2", 00:15:42.621 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:42.621 "is_configured": true, 00:15:42.621 "data_offset": 0, 00:15:42.621 "data_size": 65536 00:15:42.621 }, 00:15:42.621 { 00:15:42.621 "name": "BaseBdev3", 00:15:42.621 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:42.621 "is_configured": true, 00:15:42.621 "data_offset": 0, 00:15:42.621 
"data_size": 65536 00:15:42.621 } 00:15:42.621 ] 00:15:42.621 }' 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.621 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 [2024-11-18 13:32:12.634072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.880 [2024-11-18 13:32:12.695878] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.880 [2024-11-18 13:32:12.695949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.880 [2024-11-18 13:32:12.695966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.880 [2024-11-18 13:32:12.695974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.880 "name": "raid_bdev1", 00:15:42.880 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:42.880 "strip_size_kb": 64, 00:15:42.880 "state": "online", 00:15:42.880 "raid_level": "raid5f", 00:15:42.880 "superblock": false, 00:15:42.880 "num_base_bdevs": 3, 00:15:42.880 "num_base_bdevs_discovered": 2, 00:15:42.880 "num_base_bdevs_operational": 2, 00:15:42.880 "base_bdevs_list": [ 00:15:42.880 { 00:15:42.880 "name": null, 00:15:42.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.880 "is_configured": false, 00:15:42.880 "data_offset": 0, 00:15:42.880 "data_size": 65536 00:15:42.880 }, 00:15:42.880 { 00:15:42.880 "name": "BaseBdev2", 00:15:42.880 
"uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:42.880 "is_configured": true, 00:15:42.880 "data_offset": 0, 00:15:42.880 "data_size": 65536 00:15:42.880 }, 00:15:42.880 { 00:15:42.880 "name": "BaseBdev3", 00:15:42.880 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:42.880 "is_configured": true, 00:15:42.880 "data_offset": 0, 00:15:42.880 "data_size": 65536 00:15:42.880 } 00:15:42.880 ] 00:15:42.880 }' 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.880 13:32:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.243 "name": "raid_bdev1", 00:15:43.243 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:43.243 "strip_size_kb": 64, 00:15:43.243 "state": "online", 00:15:43.243 "raid_level": 
"raid5f", 00:15:43.243 "superblock": false, 00:15:43.243 "num_base_bdevs": 3, 00:15:43.243 "num_base_bdevs_discovered": 2, 00:15:43.243 "num_base_bdevs_operational": 2, 00:15:43.243 "base_bdevs_list": [ 00:15:43.243 { 00:15:43.243 "name": null, 00:15:43.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.243 "is_configured": false, 00:15:43.243 "data_offset": 0, 00:15:43.243 "data_size": 65536 00:15:43.243 }, 00:15:43.243 { 00:15:43.243 "name": "BaseBdev2", 00:15:43.243 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:43.243 "is_configured": true, 00:15:43.243 "data_offset": 0, 00:15:43.243 "data_size": 65536 00:15:43.243 }, 00:15:43.243 { 00:15:43.243 "name": "BaseBdev3", 00:15:43.243 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:43.243 "is_configured": true, 00:15:43.243 "data_offset": 0, 00:15:43.243 "data_size": 65536 00:15:43.243 } 00:15:43.243 ] 00:15:43.243 }' 00:15:43.243 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.502 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.502 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.502 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.502 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.502 13:32:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.502 13:32:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.502 [2024-11-18 13:32:13.353508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.502 [2024-11-18 13:32:13.369348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:43.502 13:32:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.502 13:32:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:43.502 [2024-11-18 13:32:13.377003] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.440 "name": "raid_bdev1", 00:15:44.440 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:44.440 "strip_size_kb": 64, 00:15:44.440 "state": "online", 00:15:44.440 "raid_level": "raid5f", 00:15:44.440 "superblock": false, 00:15:44.440 "num_base_bdevs": 3, 00:15:44.440 "num_base_bdevs_discovered": 3, 00:15:44.440 "num_base_bdevs_operational": 3, 00:15:44.440 "process": { 00:15:44.440 "type": "rebuild", 00:15:44.440 "target": "spare", 00:15:44.440 "progress": { 00:15:44.440 "blocks": 20480, 00:15:44.440 
"percent": 15 00:15:44.440 } 00:15:44.440 }, 00:15:44.440 "base_bdevs_list": [ 00:15:44.440 { 00:15:44.440 "name": "spare", 00:15:44.440 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:44.440 "is_configured": true, 00:15:44.440 "data_offset": 0, 00:15:44.440 "data_size": 65536 00:15:44.440 }, 00:15:44.440 { 00:15:44.440 "name": "BaseBdev2", 00:15:44.440 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:44.440 "is_configured": true, 00:15:44.440 "data_offset": 0, 00:15:44.440 "data_size": 65536 00:15:44.440 }, 00:15:44.440 { 00:15:44.440 "name": "BaseBdev3", 00:15:44.440 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:44.440 "is_configured": true, 00:15:44.440 "data_offset": 0, 00:15:44.440 "data_size": 65536 00:15:44.440 } 00:15:44.440 ] 00:15:44.440 }' 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.440 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.698 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.698 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=548 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.699 "name": "raid_bdev1", 00:15:44.699 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:44.699 "strip_size_kb": 64, 00:15:44.699 "state": "online", 00:15:44.699 "raid_level": "raid5f", 00:15:44.699 "superblock": false, 00:15:44.699 "num_base_bdevs": 3, 00:15:44.699 "num_base_bdevs_discovered": 3, 00:15:44.699 "num_base_bdevs_operational": 3, 00:15:44.699 "process": { 00:15:44.699 "type": "rebuild", 00:15:44.699 "target": "spare", 00:15:44.699 "progress": { 00:15:44.699 "blocks": 22528, 00:15:44.699 "percent": 17 00:15:44.699 } 00:15:44.699 }, 00:15:44.699 "base_bdevs_list": [ 00:15:44.699 { 00:15:44.699 "name": "spare", 00:15:44.699 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:44.699 "is_configured": true, 00:15:44.699 "data_offset": 0, 00:15:44.699 "data_size": 65536 00:15:44.699 }, 00:15:44.699 { 00:15:44.699 "name": "BaseBdev2", 00:15:44.699 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:44.699 "is_configured": true, 00:15:44.699 "data_offset": 0, 00:15:44.699 
"data_size": 65536 00:15:44.699 }, 00:15:44.699 { 00:15:44.699 "name": "BaseBdev3", 00:15:44.699 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:44.699 "is_configured": true, 00:15:44.699 "data_offset": 0, 00:15:44.699 "data_size": 65536 00:15:44.699 } 00:15:44.699 ] 00:15:44.699 }' 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.699 13:32:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.636 13:32:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.895 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.895 "name": "raid_bdev1", 00:15:45.895 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:45.895 "strip_size_kb": 64, 00:15:45.895 "state": "online", 00:15:45.895 "raid_level": "raid5f", 00:15:45.895 "superblock": false, 00:15:45.895 "num_base_bdevs": 3, 00:15:45.895 "num_base_bdevs_discovered": 3, 00:15:45.895 "num_base_bdevs_operational": 3, 00:15:45.895 "process": { 00:15:45.895 "type": "rebuild", 00:15:45.895 "target": "spare", 00:15:45.895 "progress": { 00:15:45.895 "blocks": 45056, 00:15:45.895 "percent": 34 00:15:45.895 } 00:15:45.895 }, 00:15:45.895 "base_bdevs_list": [ 00:15:45.895 { 00:15:45.895 "name": "spare", 00:15:45.895 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:45.895 "is_configured": true, 00:15:45.895 "data_offset": 0, 00:15:45.895 "data_size": 65536 00:15:45.895 }, 00:15:45.895 { 00:15:45.895 "name": "BaseBdev2", 00:15:45.895 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:45.895 "is_configured": true, 00:15:45.895 "data_offset": 0, 00:15:45.895 "data_size": 65536 00:15:45.895 }, 00:15:45.895 { 00:15:45.895 "name": "BaseBdev3", 00:15:45.895 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:45.895 "is_configured": true, 00:15:45.895 "data_offset": 0, 00:15:45.895 "data_size": 65536 00:15:45.895 } 00:15:45.895 ] 00:15:45.895 }' 00:15:45.895 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.895 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.895 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.895 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.895 13:32:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.834 "name": "raid_bdev1", 00:15:46.834 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:46.834 "strip_size_kb": 64, 00:15:46.834 "state": "online", 00:15:46.834 "raid_level": "raid5f", 00:15:46.834 "superblock": false, 00:15:46.834 "num_base_bdevs": 3, 00:15:46.834 "num_base_bdevs_discovered": 3, 00:15:46.834 "num_base_bdevs_operational": 3, 00:15:46.834 "process": { 00:15:46.834 "type": "rebuild", 00:15:46.834 "target": "spare", 00:15:46.834 "progress": { 00:15:46.834 "blocks": 69632, 00:15:46.834 "percent": 53 00:15:46.834 } 00:15:46.834 }, 00:15:46.834 "base_bdevs_list": [ 00:15:46.834 { 00:15:46.834 "name": "spare", 00:15:46.834 "uuid": 
"6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:46.834 "is_configured": true, 00:15:46.834 "data_offset": 0, 00:15:46.834 "data_size": 65536 00:15:46.834 }, 00:15:46.834 { 00:15:46.834 "name": "BaseBdev2", 00:15:46.834 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:46.834 "is_configured": true, 00:15:46.834 "data_offset": 0, 00:15:46.834 "data_size": 65536 00:15:46.834 }, 00:15:46.834 { 00:15:46.834 "name": "BaseBdev3", 00:15:46.834 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:46.834 "is_configured": true, 00:15:46.834 "data_offset": 0, 00:15:46.834 "data_size": 65536 00:15:46.834 } 00:15:46.834 ] 00:15:46.834 }' 00:15:46.834 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.110 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.110 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.110 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.110 13:32:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.048 13:32:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.048 13:32:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.048 13:32:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.048 "name": "raid_bdev1", 00:15:48.048 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:48.048 "strip_size_kb": 64, 00:15:48.048 "state": "online", 00:15:48.048 "raid_level": "raid5f", 00:15:48.048 "superblock": false, 00:15:48.048 "num_base_bdevs": 3, 00:15:48.048 "num_base_bdevs_discovered": 3, 00:15:48.048 "num_base_bdevs_operational": 3, 00:15:48.048 "process": { 00:15:48.048 "type": "rebuild", 00:15:48.048 "target": "spare", 00:15:48.048 "progress": { 00:15:48.048 "blocks": 92160, 00:15:48.048 "percent": 70 00:15:48.048 } 00:15:48.048 }, 00:15:48.048 "base_bdevs_list": [ 00:15:48.048 { 00:15:48.048 "name": "spare", 00:15:48.048 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:48.048 "is_configured": true, 00:15:48.048 "data_offset": 0, 00:15:48.048 "data_size": 65536 00:15:48.048 }, 00:15:48.048 { 00:15:48.048 "name": "BaseBdev2", 00:15:48.048 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:48.048 "is_configured": true, 00:15:48.048 "data_offset": 0, 00:15:48.048 "data_size": 65536 00:15:48.048 }, 00:15:48.048 { 00:15:48.048 "name": "BaseBdev3", 00:15:48.048 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:48.048 "is_configured": true, 00:15:48.048 "data_offset": 0, 00:15:48.048 "data_size": 65536 00:15:48.048 } 00:15:48.048 ] 00:15:48.048 }' 00:15:48.048 13:32:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.048 13:32:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.048 13:32:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.307 13:32:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.307 13:32:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.247 "name": "raid_bdev1", 00:15:49.247 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:49.247 "strip_size_kb": 64, 00:15:49.247 "state": "online", 00:15:49.247 "raid_level": "raid5f", 00:15:49.247 "superblock": false, 00:15:49.247 "num_base_bdevs": 3, 00:15:49.247 "num_base_bdevs_discovered": 3, 00:15:49.247 
"num_base_bdevs_operational": 3, 00:15:49.247 "process": { 00:15:49.247 "type": "rebuild", 00:15:49.247 "target": "spare", 00:15:49.247 "progress": { 00:15:49.247 "blocks": 116736, 00:15:49.247 "percent": 89 00:15:49.247 } 00:15:49.247 }, 00:15:49.247 "base_bdevs_list": [ 00:15:49.247 { 00:15:49.247 "name": "spare", 00:15:49.247 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:49.247 "is_configured": true, 00:15:49.247 "data_offset": 0, 00:15:49.247 "data_size": 65536 00:15:49.247 }, 00:15:49.247 { 00:15:49.247 "name": "BaseBdev2", 00:15:49.247 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:49.247 "is_configured": true, 00:15:49.247 "data_offset": 0, 00:15:49.247 "data_size": 65536 00:15:49.247 }, 00:15:49.247 { 00:15:49.247 "name": "BaseBdev3", 00:15:49.247 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:49.247 "is_configured": true, 00:15:49.247 "data_offset": 0, 00:15:49.247 "data_size": 65536 00:15:49.247 } 00:15:49.247 ] 00:15:49.247 }' 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.247 13:32:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.815 [2024-11-18 13:32:19.814380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:49.815 [2024-11-18 13:32:19.814454] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:49.815 [2024-11-18 13:32:19.814489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.383 "name": "raid_bdev1", 00:15:50.383 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:50.383 "strip_size_kb": 64, 00:15:50.383 "state": "online", 00:15:50.383 "raid_level": "raid5f", 00:15:50.383 "superblock": false, 00:15:50.383 "num_base_bdevs": 3, 00:15:50.383 "num_base_bdevs_discovered": 3, 00:15:50.383 "num_base_bdevs_operational": 3, 00:15:50.383 "base_bdevs_list": [ 00:15:50.383 { 00:15:50.383 "name": "spare", 00:15:50.383 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:50.383 "is_configured": true, 00:15:50.383 "data_offset": 0, 00:15:50.383 "data_size": 65536 00:15:50.383 }, 00:15:50.383 { 00:15:50.383 "name": "BaseBdev2", 00:15:50.383 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:50.383 "is_configured": true, 00:15:50.383 
"data_offset": 0, 00:15:50.383 "data_size": 65536 00:15:50.383 }, 00:15:50.383 { 00:15:50.383 "name": "BaseBdev3", 00:15:50.383 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:50.383 "is_configured": true, 00:15:50.383 "data_offset": 0, 00:15:50.383 "data_size": 65536 00:15:50.383 } 00:15:50.383 ] 00:15:50.383 }' 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:50.383 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.384 13:32:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.384 "name": "raid_bdev1", 00:15:50.384 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:50.384 "strip_size_kb": 64, 00:15:50.384 "state": "online", 00:15:50.384 "raid_level": "raid5f", 00:15:50.384 "superblock": false, 00:15:50.384 "num_base_bdevs": 3, 00:15:50.384 "num_base_bdevs_discovered": 3, 00:15:50.384 "num_base_bdevs_operational": 3, 00:15:50.384 "base_bdevs_list": [ 00:15:50.384 { 00:15:50.384 "name": "spare", 00:15:50.384 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:50.384 "is_configured": true, 00:15:50.384 "data_offset": 0, 00:15:50.384 "data_size": 65536 00:15:50.384 }, 00:15:50.384 { 00:15:50.384 "name": "BaseBdev2", 00:15:50.384 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:50.384 "is_configured": true, 00:15:50.384 "data_offset": 0, 00:15:50.384 "data_size": 65536 00:15:50.384 }, 00:15:50.384 { 00:15:50.384 "name": "BaseBdev3", 00:15:50.384 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:50.384 "is_configured": true, 00:15:50.384 "data_offset": 0, 00:15:50.384 "data_size": 65536 00:15:50.384 } 00:15:50.384 ] 00:15:50.384 }' 00:15:50.384 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.643 13:32:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.643 "name": "raid_bdev1", 00:15:50.643 "uuid": "14c76353-f4ee-4344-80ab-59b6a414061a", 00:15:50.643 "strip_size_kb": 64, 00:15:50.643 "state": "online", 00:15:50.643 "raid_level": "raid5f", 00:15:50.643 "superblock": false, 00:15:50.643 "num_base_bdevs": 3, 00:15:50.643 "num_base_bdevs_discovered": 3, 00:15:50.643 "num_base_bdevs_operational": 3, 00:15:50.643 "base_bdevs_list": [ 00:15:50.643 { 00:15:50.643 "name": "spare", 00:15:50.643 "uuid": "6847c6d6-963b-5010-90ac-4622bef3a565", 00:15:50.643 "is_configured": true, 00:15:50.643 "data_offset": 0, 00:15:50.643 "data_size": 65536 00:15:50.643 }, 00:15:50.643 { 00:15:50.643 
"name": "BaseBdev2", 00:15:50.643 "uuid": "496b0af7-c2e6-58cf-bc40-668b4275c5bf", 00:15:50.643 "is_configured": true, 00:15:50.643 "data_offset": 0, 00:15:50.643 "data_size": 65536 00:15:50.643 }, 00:15:50.643 { 00:15:50.643 "name": "BaseBdev3", 00:15:50.643 "uuid": "1183e295-0049-54a5-9e50-b3aa3825f5ac", 00:15:50.643 "is_configured": true, 00:15:50.643 "data_offset": 0, 00:15:50.643 "data_size": 65536 00:15:50.643 } 00:15:50.643 ] 00:15:50.643 }' 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.643 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.211 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.211 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 [2024-11-18 13:32:20.989864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.211 [2024-11-18 13:32:20.989898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.211 [2024-11-18 13:32:20.989981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.211 [2024-11-18 13:32:20.990054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.211 [2024-11-18 13:32:20.990076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:51.211 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.211 13:32:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.211 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.211 13:32:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:51.211 13:32:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.211 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:51.212 /dev/nbd0 00:15:51.212 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.470 13:32:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.470 1+0 records in 00:15:51.470 1+0 records out 00:15:51.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477079 s, 8.6 MB/s 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:51.470 /dev/nbd1 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.470 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.729 1+0 records in 00:15:51.729 1+0 records out 00:15:51.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394713 s, 10.4 MB/s 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.729 13:32:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.729 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.989 13:32:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81522 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81522 ']' 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81522 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81522 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.249 killing process with pid 81522 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81522' 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81522 00:15:52.249 Received shutdown signal, test time was about 60.000000 seconds 00:15:52.249 00:15:52.249 Latency(us) 00:15:52.249 [2024-11-18T13:32:22.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.249 [2024-11-18T13:32:22.303Z] =================================================================================================================== 00:15:52.249 [2024-11-18T13:32:22.303Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:52.249 [2024-11-18 13:32:22.160781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.249 13:32:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81522 00:15:52.508 [2024-11-18 13:32:22.534049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.888 00:15:53.888 real 0m15.237s 00:15:53.888 user 0m18.640s 00:15:53.888 sys 0m2.212s 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.888 ************************************ 00:15:53.888 END TEST raid5f_rebuild_test 00:15:53.888 ************************************ 00:15:53.888 13:32:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:53.888 13:32:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:53.888 13:32:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.888 13:32:23 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:53.888 ************************************ 00:15:53.888 START TEST raid5f_rebuild_test_sb 00:15:53.888 ************************************ 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81961 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81961 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81961 ']' 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.888 13:32:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.888 [2024-11-18 13:32:23.731188] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:53.888 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:53.888 Zero copy mechanism will not be used. 00:15:53.888 [2024-11-18 13:32:23.731728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81961 ] 00:15:53.888 [2024-11-18 13:32:23.905511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.148 [2024-11-18 13:32:24.011381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.408 [2024-11-18 13:32:24.207409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.408 [2024-11-18 13:32:24.207442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.667 BaseBdev1_malloc 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.667 [2024-11-18 13:32:24.607616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:54.667 [2024-11-18 13:32:24.607688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.667 [2024-11-18 13:32:24.607711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.667 [2024-11-18 13:32:24.607721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.667 [2024-11-18 13:32:24.609804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.667 [2024-11-18 13:32:24.609843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.667 BaseBdev1 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.667 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.668 13:32:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.668 BaseBdev2_malloc 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.668 [2024-11-18 13:32:24.659875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:54.668 [2024-11-18 13:32:24.659949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.668 [2024-11-18 13:32:24.659966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:54.668 [2024-11-18 13:32:24.659978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.668 [2024-11-18 13:32:24.661883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.668 [2024-11-18 13:32:24.661921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.668 BaseBdev2 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.668 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:54.927 BaseBdev3_malloc 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.927 [2024-11-18 13:32:24.746433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:54.927 [2024-11-18 13:32:24.746499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.927 [2024-11-18 13:32:24.746519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.927 [2024-11-18 13:32:24.746529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.927 [2024-11-18 13:32:24.748510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.927 [2024-11-18 13:32:24.748565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.927 BaseBdev3 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.927 spare_malloc 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.927 spare_delay 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.927 [2024-11-18 13:32:24.811756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.927 [2024-11-18 13:32:24.811805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.927 [2024-11-18 13:32:24.811821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:54.927 [2024-11-18 13:32:24.811830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.927 [2024-11-18 13:32:24.813800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.927 [2024-11-18 13:32:24.813854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.927 spare 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.927 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.927 [2024-11-18 13:32:24.823803] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.927 [2024-11-18 13:32:24.825500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.927 [2024-11-18 13:32:24.825575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.928 [2024-11-18 13:32:24.825737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:54.928 [2024-11-18 13:32:24.825751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:54.928 [2024-11-18 13:32:24.825982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:54.928 [2024-11-18 13:32:24.831092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:54.928 [2024-11-18 13:32:24.831138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:54.928 [2024-11-18 13:32:24.831314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.928 "name": "raid_bdev1", 00:15:54.928 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:15:54.928 "strip_size_kb": 64, 00:15:54.928 "state": "online", 00:15:54.928 "raid_level": "raid5f", 00:15:54.928 "superblock": true, 00:15:54.928 "num_base_bdevs": 3, 00:15:54.928 "num_base_bdevs_discovered": 3, 00:15:54.928 "num_base_bdevs_operational": 3, 00:15:54.928 "base_bdevs_list": [ 00:15:54.928 { 00:15:54.928 "name": "BaseBdev1", 00:15:54.928 "uuid": "26edf824-752c-5604-8466-b3ce64a952de", 00:15:54.928 "is_configured": true, 00:15:54.928 "data_offset": 2048, 00:15:54.928 "data_size": 63488 00:15:54.928 }, 00:15:54.928 { 00:15:54.928 "name": "BaseBdev2", 00:15:54.928 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:15:54.928 "is_configured": true, 00:15:54.928 "data_offset": 2048, 00:15:54.928 "data_size": 63488 00:15:54.928 }, 00:15:54.928 { 00:15:54.928 "name": "BaseBdev3", 00:15:54.928 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:15:54.928 "is_configured": true, 
00:15:54.928 "data_offset": 2048, 00:15:54.928 "data_size": 63488 00:15:54.928 } 00:15:54.928 ] 00:15:54.928 }' 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.928 13:32:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.496 [2024-11-18 13:32:25.332723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:55.496 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:55.497 13:32:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.497 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:55.756 [2024-11-18 13:32:25.596156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:55.756 /dev/nbd0 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.756 1+0 records in 00:15:55.756 1+0 records out 00:15:55.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446857 s, 9.2 MB/s 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:55.756 13:32:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:56.326 496+0 records in 00:15:56.326 496+0 records out 00:15:56.326 65011712 bytes (65 MB, 62 MiB) copied, 0.525262 s, 124 MB/s 00:15:56.326 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:56.326 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.326 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:56.326 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.326 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:56.326 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.326 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:56.586 [2024-11-18 13:32:26.434065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.586 [2024-11-18 13:32:26.464495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.586 13:32:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.586 "name": "raid_bdev1", 00:15:56.586 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:15:56.586 "strip_size_kb": 64, 00:15:56.586 "state": "online", 00:15:56.586 "raid_level": "raid5f", 00:15:56.586 "superblock": true, 00:15:56.586 "num_base_bdevs": 3, 00:15:56.586 "num_base_bdevs_discovered": 2, 00:15:56.586 "num_base_bdevs_operational": 2, 00:15:56.586 "base_bdevs_list": [ 00:15:56.586 { 00:15:56.586 "name": null, 00:15:56.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.586 "is_configured": false, 00:15:56.586 "data_offset": 0, 00:15:56.586 "data_size": 63488 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "name": "BaseBdev2", 00:15:56.586 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:15:56.586 "is_configured": true, 00:15:56.586 "data_offset": 2048, 00:15:56.586 "data_size": 63488 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "name": "BaseBdev3", 00:15:56.586 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:15:56.586 "is_configured": true, 00:15:56.586 "data_offset": 2048, 00:15:56.586 "data_size": 63488 00:15:56.586 } 00:15:56.586 ] 00:15:56.586 }' 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.586 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.155 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.155 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.155 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.155 [2024-11-18 13:32:26.903917] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.155 [2024-11-18 13:32:26.919242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:57.155 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.155 13:32:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.155 [2024-11-18 13:32:26.926290] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.092 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.092 "name": "raid_bdev1", 00:15:58.092 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:15:58.092 "strip_size_kb": 64, 00:15:58.092 "state": "online", 00:15:58.092 "raid_level": "raid5f", 00:15:58.092 
"superblock": true, 00:15:58.092 "num_base_bdevs": 3, 00:15:58.092 "num_base_bdevs_discovered": 3, 00:15:58.092 "num_base_bdevs_operational": 3, 00:15:58.092 "process": { 00:15:58.092 "type": "rebuild", 00:15:58.092 "target": "spare", 00:15:58.092 "progress": { 00:15:58.092 "blocks": 20480, 00:15:58.092 "percent": 16 00:15:58.092 } 00:15:58.092 }, 00:15:58.092 "base_bdevs_list": [ 00:15:58.092 { 00:15:58.092 "name": "spare", 00:15:58.092 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:15:58.092 "is_configured": true, 00:15:58.092 "data_offset": 2048, 00:15:58.092 "data_size": 63488 00:15:58.092 }, 00:15:58.092 { 00:15:58.092 "name": "BaseBdev2", 00:15:58.092 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:15:58.092 "is_configured": true, 00:15:58.092 "data_offset": 2048, 00:15:58.092 "data_size": 63488 00:15:58.092 }, 00:15:58.092 { 00:15:58.092 "name": "BaseBdev3", 00:15:58.092 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:15:58.092 "is_configured": true, 00:15:58.092 "data_offset": 2048, 00:15:58.092 "data_size": 63488 00:15:58.093 } 00:15:58.093 ] 00:15:58.093 }' 00:15:58.093 13:32:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.093 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.093 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.093 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.093 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.093 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.093 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.093 [2024-11-18 13:32:28.085389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:58.093 [2024-11-18 13:32:28.133509] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.093 [2024-11-18 13:32:28.133561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.093 [2024-11-18 13:32:28.133579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.093 [2024-11-18 13:32:28.133587] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.353 "name": "raid_bdev1", 00:15:58.353 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:15:58.353 "strip_size_kb": 64, 00:15:58.353 "state": "online", 00:15:58.353 "raid_level": "raid5f", 00:15:58.353 "superblock": true, 00:15:58.353 "num_base_bdevs": 3, 00:15:58.353 "num_base_bdevs_discovered": 2, 00:15:58.353 "num_base_bdevs_operational": 2, 00:15:58.353 "base_bdevs_list": [ 00:15:58.353 { 00:15:58.353 "name": null, 00:15:58.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.353 "is_configured": false, 00:15:58.353 "data_offset": 0, 00:15:58.353 "data_size": 63488 00:15:58.353 }, 00:15:58.353 { 00:15:58.353 "name": "BaseBdev2", 00:15:58.353 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:15:58.353 "is_configured": true, 00:15:58.353 "data_offset": 2048, 00:15:58.353 "data_size": 63488 00:15:58.353 }, 00:15:58.353 { 00:15:58.353 "name": "BaseBdev3", 00:15:58.353 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:15:58.353 "is_configured": true, 00:15:58.353 "data_offset": 2048, 00:15:58.353 "data_size": 63488 00:15:58.353 } 00:15:58.353 ] 00:15:58.353 }' 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.353 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.613 13:32:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.613 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.873 "name": "raid_bdev1", 00:15:58.873 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:15:58.873 "strip_size_kb": 64, 00:15:58.873 "state": "online", 00:15:58.873 "raid_level": "raid5f", 00:15:58.873 "superblock": true, 00:15:58.873 "num_base_bdevs": 3, 00:15:58.873 "num_base_bdevs_discovered": 2, 00:15:58.873 "num_base_bdevs_operational": 2, 00:15:58.873 "base_bdevs_list": [ 00:15:58.873 { 00:15:58.873 "name": null, 00:15:58.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.873 "is_configured": false, 00:15:58.873 "data_offset": 0, 00:15:58.873 "data_size": 63488 00:15:58.873 }, 00:15:58.873 { 00:15:58.873 "name": "BaseBdev2", 00:15:58.873 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:15:58.873 "is_configured": true, 00:15:58.873 "data_offset": 2048, 00:15:58.873 "data_size": 63488 00:15:58.873 }, 00:15:58.873 { 00:15:58.873 "name": "BaseBdev3", 00:15:58.873 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:15:58.873 "is_configured": true, 00:15:58.873 "data_offset": 2048, 00:15:58.873 
"data_size": 63488 00:15:58.873 } 00:15:58.873 ] 00:15:58.873 }' 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.873 [2024-11-18 13:32:28.783321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.873 [2024-11-18 13:32:28.798067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.873 13:32:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:58.873 [2024-11-18 13:32:28.805335] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.827 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.827 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.827 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.827 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.827 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:59.827 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.827 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.828 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.828 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.828 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.828 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.828 "name": "raid_bdev1", 00:15:59.828 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:15:59.828 "strip_size_kb": 64, 00:15:59.828 "state": "online", 00:15:59.828 "raid_level": "raid5f", 00:15:59.828 "superblock": true, 00:15:59.828 "num_base_bdevs": 3, 00:15:59.828 "num_base_bdevs_discovered": 3, 00:15:59.828 "num_base_bdevs_operational": 3, 00:15:59.828 "process": { 00:15:59.828 "type": "rebuild", 00:15:59.828 "target": "spare", 00:15:59.828 "progress": { 00:15:59.828 "blocks": 20480, 00:15:59.828 "percent": 16 00:15:59.828 } 00:15:59.828 }, 00:15:59.828 "base_bdevs_list": [ 00:15:59.828 { 00:15:59.828 "name": "spare", 00:15:59.828 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:15:59.828 "is_configured": true, 00:15:59.828 "data_offset": 2048, 00:15:59.828 "data_size": 63488 00:15:59.828 }, 00:15:59.828 { 00:15:59.828 "name": "BaseBdev2", 00:15:59.828 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:15:59.828 "is_configured": true, 00:15:59.828 "data_offset": 2048, 00:15:59.828 "data_size": 63488 00:15:59.828 }, 00:15:59.828 { 00:15:59.828 "name": "BaseBdev3", 00:15:59.828 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:15:59.828 "is_configured": true, 00:15:59.828 "data_offset": 2048, 00:15:59.828 "data_size": 63488 00:15:59.828 } 00:15:59.828 ] 00:15:59.828 }' 
00:15:59.828 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:00.087 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=563 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.087 13:32:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.087 13:32:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.087 "name": "raid_bdev1", 00:16:00.087 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:00.087 "strip_size_kb": 64, 00:16:00.087 "state": "online", 00:16:00.087 "raid_level": "raid5f", 00:16:00.087 "superblock": true, 00:16:00.087 "num_base_bdevs": 3, 00:16:00.087 "num_base_bdevs_discovered": 3, 00:16:00.087 "num_base_bdevs_operational": 3, 00:16:00.087 "process": { 00:16:00.087 "type": "rebuild", 00:16:00.087 "target": "spare", 00:16:00.087 "progress": { 00:16:00.087 "blocks": 22528, 00:16:00.087 "percent": 17 00:16:00.087 } 00:16:00.087 }, 00:16:00.087 "base_bdevs_list": [ 00:16:00.087 { 00:16:00.087 "name": "spare", 00:16:00.087 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:00.087 "is_configured": true, 00:16:00.087 "data_offset": 2048, 00:16:00.087 "data_size": 63488 00:16:00.087 }, 00:16:00.087 { 00:16:00.087 "name": "BaseBdev2", 00:16:00.087 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:00.087 "is_configured": true, 00:16:00.087 "data_offset": 2048, 00:16:00.087 "data_size": 63488 00:16:00.087 }, 00:16:00.087 { 00:16:00.087 "name": "BaseBdev3", 00:16:00.087 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:00.087 "is_configured": true, 00:16:00.087 "data_offset": 2048, 00:16:00.087 "data_size": 63488 00:16:00.087 } 00:16:00.087 ] 00:16:00.088 }' 00:16:00.088 13:32:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.088 13:32:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:00.088 13:32:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.088 13:32:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.088 13:32:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.469 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.469 "name": "raid_bdev1", 00:16:01.469 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:01.469 "strip_size_kb": 64, 00:16:01.469 "state": "online", 00:16:01.469 "raid_level": "raid5f", 00:16:01.469 "superblock": true, 00:16:01.469 "num_base_bdevs": 3, 00:16:01.469 "num_base_bdevs_discovered": 3, 00:16:01.469 
"num_base_bdevs_operational": 3, 00:16:01.469 "process": { 00:16:01.469 "type": "rebuild", 00:16:01.469 "target": "spare", 00:16:01.469 "progress": { 00:16:01.469 "blocks": 45056, 00:16:01.469 "percent": 35 00:16:01.469 } 00:16:01.469 }, 00:16:01.469 "base_bdevs_list": [ 00:16:01.469 { 00:16:01.469 "name": "spare", 00:16:01.469 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:01.470 "is_configured": true, 00:16:01.470 "data_offset": 2048, 00:16:01.470 "data_size": 63488 00:16:01.470 }, 00:16:01.470 { 00:16:01.470 "name": "BaseBdev2", 00:16:01.470 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:01.470 "is_configured": true, 00:16:01.470 "data_offset": 2048, 00:16:01.470 "data_size": 63488 00:16:01.470 }, 00:16:01.470 { 00:16:01.470 "name": "BaseBdev3", 00:16:01.470 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:01.470 "is_configured": true, 00:16:01.470 "data_offset": 2048, 00:16:01.470 "data_size": 63488 00:16:01.470 } 00:16:01.470 ] 00:16:01.470 }' 00:16:01.470 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.470 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.470 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.470 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.470 13:32:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.410 "name": "raid_bdev1", 00:16:02.410 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:02.410 "strip_size_kb": 64, 00:16:02.410 "state": "online", 00:16:02.410 "raid_level": "raid5f", 00:16:02.410 "superblock": true, 00:16:02.410 "num_base_bdevs": 3, 00:16:02.410 "num_base_bdevs_discovered": 3, 00:16:02.410 "num_base_bdevs_operational": 3, 00:16:02.410 "process": { 00:16:02.410 "type": "rebuild", 00:16:02.410 "target": "spare", 00:16:02.410 "progress": { 00:16:02.410 "blocks": 69632, 00:16:02.410 "percent": 54 00:16:02.410 } 00:16:02.410 }, 00:16:02.410 "base_bdevs_list": [ 00:16:02.410 { 00:16:02.410 "name": "spare", 00:16:02.410 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:02.410 "is_configured": true, 00:16:02.410 "data_offset": 2048, 00:16:02.410 "data_size": 63488 00:16:02.410 }, 00:16:02.410 { 00:16:02.410 "name": "BaseBdev2", 00:16:02.410 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:02.410 "is_configured": true, 00:16:02.410 "data_offset": 2048, 00:16:02.410 "data_size": 63488 00:16:02.410 }, 00:16:02.410 { 00:16:02.410 "name": "BaseBdev3", 
00:16:02.410 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:02.410 "is_configured": true, 00:16:02.410 "data_offset": 2048, 00:16:02.410 "data_size": 63488 00:16:02.410 } 00:16:02.410 ] 00:16:02.410 }' 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.410 13:32:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:03.792 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.792 "name": "raid_bdev1", 00:16:03.792 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:03.792 "strip_size_kb": 64, 00:16:03.792 "state": "online", 00:16:03.792 "raid_level": "raid5f", 00:16:03.792 "superblock": true, 00:16:03.792 "num_base_bdevs": 3, 00:16:03.792 "num_base_bdevs_discovered": 3, 00:16:03.792 "num_base_bdevs_operational": 3, 00:16:03.792 "process": { 00:16:03.792 "type": "rebuild", 00:16:03.792 "target": "spare", 00:16:03.792 "progress": { 00:16:03.792 "blocks": 92160, 00:16:03.792 "percent": 72 00:16:03.792 } 00:16:03.792 }, 00:16:03.792 "base_bdevs_list": [ 00:16:03.792 { 00:16:03.792 "name": "spare", 00:16:03.792 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:03.792 "is_configured": true, 00:16:03.792 "data_offset": 2048, 00:16:03.792 "data_size": 63488 00:16:03.792 }, 00:16:03.792 { 00:16:03.792 "name": "BaseBdev2", 00:16:03.792 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:03.792 "is_configured": true, 00:16:03.792 "data_offset": 2048, 00:16:03.792 "data_size": 63488 00:16:03.792 }, 00:16:03.792 { 00:16:03.792 "name": "BaseBdev3", 00:16:03.792 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:03.792 "is_configured": true, 00:16:03.792 "data_offset": 2048, 00:16:03.792 "data_size": 63488 00:16:03.792 } 00:16:03.792 ] 00:16:03.792 }' 00:16:03.793 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.793 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.793 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.793 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.793 13:32:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.732 13:32:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.732 "name": "raid_bdev1", 00:16:04.732 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:04.732 "strip_size_kb": 64, 00:16:04.732 "state": "online", 00:16:04.732 "raid_level": "raid5f", 00:16:04.732 "superblock": true, 00:16:04.732 "num_base_bdevs": 3, 00:16:04.732 "num_base_bdevs_discovered": 3, 00:16:04.732 "num_base_bdevs_operational": 3, 00:16:04.732 "process": { 00:16:04.732 "type": "rebuild", 00:16:04.732 "target": "spare", 00:16:04.732 "progress": { 00:16:04.732 "blocks": 116736, 00:16:04.732 "percent": 91 00:16:04.732 } 00:16:04.732 }, 00:16:04.732 "base_bdevs_list": [ 00:16:04.732 { 00:16:04.732 "name": "spare", 00:16:04.732 "uuid": 
"e424de69-e2d5-5284-8655-6ef981caa054", 00:16:04.732 "is_configured": true, 00:16:04.732 "data_offset": 2048, 00:16:04.732 "data_size": 63488 00:16:04.732 }, 00:16:04.732 { 00:16:04.732 "name": "BaseBdev2", 00:16:04.732 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:04.732 "is_configured": true, 00:16:04.732 "data_offset": 2048, 00:16:04.732 "data_size": 63488 00:16:04.732 }, 00:16:04.732 { 00:16:04.732 "name": "BaseBdev3", 00:16:04.732 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:04.732 "is_configured": true, 00:16:04.732 "data_offset": 2048, 00:16:04.732 "data_size": 63488 00:16:04.732 } 00:16:04.732 ] 00:16:04.732 }' 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.732 13:32:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.992 [2024-11-18 13:32:35.040075] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:04.992 [2024-11-18 13:32:35.040212] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:04.992 [2024-11-18 13:32:35.040338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.931 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.931 "name": "raid_bdev1", 00:16:05.931 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:05.931 "strip_size_kb": 64, 00:16:05.931 "state": "online", 00:16:05.931 "raid_level": "raid5f", 00:16:05.931 "superblock": true, 00:16:05.931 "num_base_bdevs": 3, 00:16:05.931 "num_base_bdevs_discovered": 3, 00:16:05.931 "num_base_bdevs_operational": 3, 00:16:05.931 "base_bdevs_list": [ 00:16:05.931 { 00:16:05.931 "name": "spare", 00:16:05.931 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:05.931 "is_configured": true, 00:16:05.931 "data_offset": 2048, 00:16:05.931 "data_size": 63488 00:16:05.931 }, 00:16:05.931 { 00:16:05.931 "name": "BaseBdev2", 00:16:05.931 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:05.931 "is_configured": true, 00:16:05.931 "data_offset": 2048, 00:16:05.931 "data_size": 63488 00:16:05.931 }, 00:16:05.931 { 00:16:05.931 "name": "BaseBdev3", 00:16:05.932 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:05.932 "is_configured": true, 00:16:05.932 "data_offset": 2048, 00:16:05.932 "data_size": 63488 00:16:05.932 } 
00:16:05.932 ] 00:16:05.932 }' 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.932 "name": "raid_bdev1", 00:16:05.932 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:05.932 "strip_size_kb": 64, 00:16:05.932 "state": "online", 00:16:05.932 "raid_level": 
"raid5f", 00:16:05.932 "superblock": true, 00:16:05.932 "num_base_bdevs": 3, 00:16:05.932 "num_base_bdevs_discovered": 3, 00:16:05.932 "num_base_bdevs_operational": 3, 00:16:05.932 "base_bdevs_list": [ 00:16:05.932 { 00:16:05.932 "name": "spare", 00:16:05.932 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:05.932 "is_configured": true, 00:16:05.932 "data_offset": 2048, 00:16:05.932 "data_size": 63488 00:16:05.932 }, 00:16:05.932 { 00:16:05.932 "name": "BaseBdev2", 00:16:05.932 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:05.932 "is_configured": true, 00:16:05.932 "data_offset": 2048, 00:16:05.932 "data_size": 63488 00:16:05.932 }, 00:16:05.932 { 00:16:05.932 "name": "BaseBdev3", 00:16:05.932 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:05.932 "is_configured": true, 00:16:05.932 "data_offset": 2048, 00:16:05.932 "data_size": 63488 00:16:05.932 } 00:16:05.932 ] 00:16:05.932 }' 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.932 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.192 13:32:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.192 13:32:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.192 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.192 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.192 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.192 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.192 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.192 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.192 "name": "raid_bdev1", 00:16:06.192 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:06.192 "strip_size_kb": 64, 00:16:06.192 "state": "online", 00:16:06.192 "raid_level": "raid5f", 00:16:06.192 "superblock": true, 00:16:06.192 "num_base_bdevs": 3, 00:16:06.192 "num_base_bdevs_discovered": 3, 00:16:06.192 "num_base_bdevs_operational": 3, 00:16:06.192 "base_bdevs_list": [ 00:16:06.192 { 00:16:06.192 "name": "spare", 00:16:06.192 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:06.192 "is_configured": true, 00:16:06.192 "data_offset": 2048, 00:16:06.192 "data_size": 63488 00:16:06.192 }, 00:16:06.192 { 00:16:06.192 "name": "BaseBdev2", 00:16:06.192 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:06.192 "is_configured": true, 00:16:06.192 "data_offset": 2048, 00:16:06.192 
"data_size": 63488 00:16:06.192 }, 00:16:06.192 { 00:16:06.192 "name": "BaseBdev3", 00:16:06.192 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:06.192 "is_configured": true, 00:16:06.192 "data_offset": 2048, 00:16:06.192 "data_size": 63488 00:16:06.192 } 00:16:06.192 ] 00:16:06.192 }' 00:16:06.192 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.192 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.452 [2024-11-18 13:32:36.459950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.452 [2024-11-18 13:32:36.460022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.452 [2024-11-18 13:32:36.460138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.452 [2024-11-18 13:32:36.460232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.452 [2024-11-18 13:32:36.460286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@720 -- # jq length 00:16:06.452 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:06.712 /dev/nbd0 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.712 
13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.712 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.972 1+0 records in 00:16:06.972 1+0 records out 00:16:06.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400929 s, 10.2 MB/s 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.972 13:32:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:06.972 /dev/nbd1 00:16:06.972 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.972 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.972 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:06.972 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:06.972 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.972 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.972 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.233 1+0 records in 00:16:07.233 1+0 records out 00:16:07.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524904 s, 7.8 MB/s 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:07.233 13:32:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.233 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:07.493 13:32:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.493 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:07.753 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.753 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.753 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:07.753 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.754 13:32:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.754 [2024-11-18 13:32:37.681192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.754 [2024-11-18 13:32:37.681308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.754 [2024-11-18 13:32:37.681346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:07.754 [2024-11-18 13:32:37.681377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.754 [2024-11-18 13:32:37.683680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.754 [2024-11-18 13:32:37.683759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.754 [2024-11-18 13:32:37.683872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:07.754 [2024-11-18 13:32:37.683955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.754 [2024-11-18 13:32:37.684139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.754 [2024-11-18 13:32:37.684285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.754 spare 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.754 [2024-11-18 13:32:37.784210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:07.754 [2024-11-18 13:32:37.784237] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:16:07.754 [2024-11-18 13:32:37.784476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:07.754 [2024-11-18 13:32:37.789735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:07.754 [2024-11-18 13:32:37.789755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:07.754 [2024-11-18 13:32:37.789936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.754 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.013 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.013 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:16:08.013 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.013 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.013 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.013 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.013 "name": "raid_bdev1", 00:16:08.013 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:08.013 "strip_size_kb": 64, 00:16:08.013 "state": "online", 00:16:08.013 "raid_level": "raid5f", 00:16:08.013 "superblock": true, 00:16:08.013 "num_base_bdevs": 3, 00:16:08.013 "num_base_bdevs_discovered": 3, 00:16:08.013 "num_base_bdevs_operational": 3, 00:16:08.013 "base_bdevs_list": [ 00:16:08.013 { 00:16:08.013 "name": "spare", 00:16:08.013 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:08.013 "is_configured": true, 00:16:08.013 "data_offset": 2048, 00:16:08.013 "data_size": 63488 00:16:08.013 }, 00:16:08.013 { 00:16:08.013 "name": "BaseBdev2", 00:16:08.013 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:08.013 "is_configured": true, 00:16:08.013 "data_offset": 2048, 00:16:08.013 "data_size": 63488 00:16:08.013 }, 00:16:08.013 { 00:16:08.013 "name": "BaseBdev3", 00:16:08.013 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:08.013 "is_configured": true, 00:16:08.013 "data_offset": 2048, 00:16:08.013 "data_size": 63488 00:16:08.013 } 00:16:08.013 ] 00:16:08.013 }' 00:16:08.013 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.013 13:32:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.273 
13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.273 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.533 "name": "raid_bdev1", 00:16:08.533 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:08.533 "strip_size_kb": 64, 00:16:08.533 "state": "online", 00:16:08.533 "raid_level": "raid5f", 00:16:08.533 "superblock": true, 00:16:08.533 "num_base_bdevs": 3, 00:16:08.533 "num_base_bdevs_discovered": 3, 00:16:08.533 "num_base_bdevs_operational": 3, 00:16:08.533 "base_bdevs_list": [ 00:16:08.533 { 00:16:08.533 "name": "spare", 00:16:08.533 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 00:16:08.533 "data_size": 63488 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "name": "BaseBdev2", 00:16:08.533 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 00:16:08.533 "data_size": 63488 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "name": "BaseBdev3", 00:16:08.533 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 
00:16:08.533 "data_size": 63488 00:16:08.533 } 00:16:08.533 ] 00:16:08.533 }' 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.533 [2024-11-18 13:32:38.495009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.533 "name": "raid_bdev1", 00:16:08.533 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:08.533 "strip_size_kb": 64, 00:16:08.533 "state": "online", 00:16:08.533 "raid_level": "raid5f", 00:16:08.533 "superblock": true, 00:16:08.533 "num_base_bdevs": 3, 00:16:08.533 "num_base_bdevs_discovered": 2, 00:16:08.533 "num_base_bdevs_operational": 2, 00:16:08.533 "base_bdevs_list": [ 00:16:08.533 { 00:16:08.533 "name": null, 00:16:08.533 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:08.533 "is_configured": false, 00:16:08.533 "data_offset": 0, 00:16:08.533 "data_size": 63488 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "name": "BaseBdev2", 00:16:08.533 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 00:16:08.533 "data_size": 63488 00:16:08.533 }, 00:16:08.533 { 00:16:08.533 "name": "BaseBdev3", 00:16:08.533 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:08.533 "is_configured": true, 00:16:08.533 "data_offset": 2048, 00:16:08.533 "data_size": 63488 00:16:08.533 } 00:16:08.533 ] 00:16:08.533 }' 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.533 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.103 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.103 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.103 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.103 [2024-11-18 13:32:38.978363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.103 [2024-11-18 13:32:38.978519] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.103 [2024-11-18 13:32:38.978534] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:09.103 [2024-11-18 13:32:38.978569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.103 [2024-11-18 13:32:38.993883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:09.103 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.103 13:32:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:09.103 [2024-11-18 13:32:39.000803] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.040 13:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.040 13:32:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.040 "name": "raid_bdev1", 00:16:10.040 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:10.040 "strip_size_kb": 64, 00:16:10.040 "state": "online", 00:16:10.040 
"raid_level": "raid5f", 00:16:10.040 "superblock": true, 00:16:10.040 "num_base_bdevs": 3, 00:16:10.040 "num_base_bdevs_discovered": 3, 00:16:10.040 "num_base_bdevs_operational": 3, 00:16:10.040 "process": { 00:16:10.040 "type": "rebuild", 00:16:10.040 "target": "spare", 00:16:10.040 "progress": { 00:16:10.040 "blocks": 20480, 00:16:10.040 "percent": 16 00:16:10.040 } 00:16:10.040 }, 00:16:10.040 "base_bdevs_list": [ 00:16:10.040 { 00:16:10.040 "name": "spare", 00:16:10.040 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:10.040 "is_configured": true, 00:16:10.040 "data_offset": 2048, 00:16:10.040 "data_size": 63488 00:16:10.040 }, 00:16:10.040 { 00:16:10.040 "name": "BaseBdev2", 00:16:10.040 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:10.040 "is_configured": true, 00:16:10.040 "data_offset": 2048, 00:16:10.040 "data_size": 63488 00:16:10.040 }, 00:16:10.040 { 00:16:10.040 "name": "BaseBdev3", 00:16:10.040 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:10.040 "is_configured": true, 00:16:10.040 "data_offset": 2048, 00:16:10.040 "data_size": 63488 00:16:10.040 } 00:16:10.040 ] 00:16:10.040 }' 00:16:10.040 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.300 [2024-11-18 13:32:40.148106] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.300 [2024-11-18 13:32:40.208165] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.300 [2024-11-18 13:32:40.208280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.300 [2024-11-18 13:32:40.208317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.300 [2024-11-18 13:32:40.208341] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.300 "name": "raid_bdev1", 00:16:10.300 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:10.300 "strip_size_kb": 64, 00:16:10.300 "state": "online", 00:16:10.300 "raid_level": "raid5f", 00:16:10.300 "superblock": true, 00:16:10.300 "num_base_bdevs": 3, 00:16:10.300 "num_base_bdevs_discovered": 2, 00:16:10.300 "num_base_bdevs_operational": 2, 00:16:10.300 "base_bdevs_list": [ 00:16:10.300 { 00:16:10.300 "name": null, 00:16:10.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.300 "is_configured": false, 00:16:10.300 "data_offset": 0, 00:16:10.300 "data_size": 63488 00:16:10.300 }, 00:16:10.300 { 00:16:10.300 "name": "BaseBdev2", 00:16:10.300 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:10.300 "is_configured": true, 00:16:10.300 "data_offset": 2048, 00:16:10.300 "data_size": 63488 00:16:10.300 }, 00:16:10.300 { 00:16:10.300 "name": "BaseBdev3", 00:16:10.300 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:10.300 "is_configured": true, 00:16:10.300 "data_offset": 2048, 00:16:10.300 "data_size": 63488 00:16:10.300 } 00:16:10.300 ] 00:16:10.300 }' 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.300 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.870 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.870 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.870 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.870 [2024-11-18 13:32:40.716601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.870 [2024-11-18 13:32:40.716657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.870 [2024-11-18 13:32:40.716677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:10.870 [2024-11-18 13:32:40.716689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.870 [2024-11-18 13:32:40.717153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.870 [2024-11-18 13:32:40.717175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.870 [2024-11-18 13:32:40.717256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.870 [2024-11-18 13:32:40.717270] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:10.870 [2024-11-18 13:32:40.717279] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:10.870 [2024-11-18 13:32:40.717299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.870 [2024-11-18 13:32:40.732101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:10.870 spare 00:16:10.870 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.870 13:32:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:10.870 [2024-11-18 13:32:40.739204] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.810 "name": "raid_bdev1", 00:16:11.810 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:11.810 "strip_size_kb": 64, 00:16:11.810 "state": 
"online", 00:16:11.810 "raid_level": "raid5f", 00:16:11.810 "superblock": true, 00:16:11.810 "num_base_bdevs": 3, 00:16:11.810 "num_base_bdevs_discovered": 3, 00:16:11.810 "num_base_bdevs_operational": 3, 00:16:11.810 "process": { 00:16:11.810 "type": "rebuild", 00:16:11.810 "target": "spare", 00:16:11.810 "progress": { 00:16:11.810 "blocks": 20480, 00:16:11.810 "percent": 16 00:16:11.810 } 00:16:11.810 }, 00:16:11.810 "base_bdevs_list": [ 00:16:11.810 { 00:16:11.810 "name": "spare", 00:16:11.810 "uuid": "e424de69-e2d5-5284-8655-6ef981caa054", 00:16:11.810 "is_configured": true, 00:16:11.810 "data_offset": 2048, 00:16:11.810 "data_size": 63488 00:16:11.810 }, 00:16:11.810 { 00:16:11.810 "name": "BaseBdev2", 00:16:11.810 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:11.810 "is_configured": true, 00:16:11.810 "data_offset": 2048, 00:16:11.810 "data_size": 63488 00:16:11.810 }, 00:16:11.810 { 00:16:11.810 "name": "BaseBdev3", 00:16:11.810 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:11.810 "is_configured": true, 00:16:11.810 "data_offset": 2048, 00:16:11.810 "data_size": 63488 00:16:11.810 } 00:16:11.810 ] 00:16:11.810 }' 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.810 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.071 [2024-11-18 13:32:41.874934] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.071 [2024-11-18 13:32:41.946344] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.071 [2024-11-18 13:32:41.946393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.071 [2024-11-18 13:32:41.946410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.071 [2024-11-18 13:32:41.946416] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.071 13:32:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.071 13:32:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.071 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.071 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.071 "name": "raid_bdev1", 00:16:12.071 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:12.071 "strip_size_kb": 64, 00:16:12.071 "state": "online", 00:16:12.071 "raid_level": "raid5f", 00:16:12.071 "superblock": true, 00:16:12.071 "num_base_bdevs": 3, 00:16:12.071 "num_base_bdevs_discovered": 2, 00:16:12.071 "num_base_bdevs_operational": 2, 00:16:12.071 "base_bdevs_list": [ 00:16:12.071 { 00:16:12.071 "name": null, 00:16:12.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.071 "is_configured": false, 00:16:12.071 "data_offset": 0, 00:16:12.071 "data_size": 63488 00:16:12.071 }, 00:16:12.071 { 00:16:12.071 "name": "BaseBdev2", 00:16:12.071 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:12.071 "is_configured": true, 00:16:12.071 "data_offset": 2048, 00:16:12.071 "data_size": 63488 00:16:12.071 }, 00:16:12.071 { 00:16:12.071 "name": "BaseBdev3", 00:16:12.071 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:12.071 "is_configured": true, 00:16:12.071 "data_offset": 2048, 00:16:12.071 "data_size": 63488 00:16:12.071 } 00:16:12.071 ] 00:16:12.071 }' 00:16:12.071 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.071 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.641 "name": "raid_bdev1", 00:16:12.641 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:12.641 "strip_size_kb": 64, 00:16:12.641 "state": "online", 00:16:12.641 "raid_level": "raid5f", 00:16:12.641 "superblock": true, 00:16:12.641 "num_base_bdevs": 3, 00:16:12.641 "num_base_bdevs_discovered": 2, 00:16:12.641 "num_base_bdevs_operational": 2, 00:16:12.641 "base_bdevs_list": [ 00:16:12.641 { 00:16:12.641 "name": null, 00:16:12.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.641 "is_configured": false, 00:16:12.641 "data_offset": 0, 00:16:12.641 "data_size": 63488 00:16:12.641 }, 00:16:12.641 { 00:16:12.641 "name": "BaseBdev2", 00:16:12.641 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:12.641 "is_configured": true, 00:16:12.641 "data_offset": 2048, 00:16:12.641 "data_size": 63488 00:16:12.641 }, 00:16:12.641 { 00:16:12.641 "name": "BaseBdev3", 00:16:12.641 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:12.641 
"is_configured": true, 00:16:12.641 "data_offset": 2048, 00:16:12.641 "data_size": 63488 00:16:12.641 } 00:16:12.641 ] 00:16:12.641 }' 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.641 [2024-11-18 13:32:42.586309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:12.641 [2024-11-18 13:32:42.586408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.641 [2024-11-18 13:32:42.586451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:12.641 [2024-11-18 13:32:42.586460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.641 [2024-11-18 13:32:42.586898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.641 
[2024-11-18 13:32:42.586917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:12.641 [2024-11-18 13:32:42.586992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:12.641 [2024-11-18 13:32:42.587007] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:12.641 [2024-11-18 13:32:42.587026] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:12.641 [2024-11-18 13:32:42.587035] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:12.641 BaseBdev1 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.641 13:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.580 13:32:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.580 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.840 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.840 "name": "raid_bdev1", 00:16:13.840 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:13.840 "strip_size_kb": 64, 00:16:13.840 "state": "online", 00:16:13.840 "raid_level": "raid5f", 00:16:13.840 "superblock": true, 00:16:13.840 "num_base_bdevs": 3, 00:16:13.840 "num_base_bdevs_discovered": 2, 00:16:13.840 "num_base_bdevs_operational": 2, 00:16:13.840 "base_bdevs_list": [ 00:16:13.840 { 00:16:13.840 "name": null, 00:16:13.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.840 "is_configured": false, 00:16:13.840 "data_offset": 0, 00:16:13.840 "data_size": 63488 00:16:13.840 }, 00:16:13.840 { 00:16:13.840 "name": "BaseBdev2", 00:16:13.840 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:13.840 "is_configured": true, 00:16:13.840 "data_offset": 2048, 00:16:13.840 "data_size": 63488 00:16:13.840 }, 00:16:13.840 { 00:16:13.840 "name": "BaseBdev3", 00:16:13.840 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:13.840 "is_configured": true, 00:16:13.840 "data_offset": 2048, 00:16:13.840 "data_size": 63488 00:16:13.840 } 00:16:13.840 ] 00:16:13.840 }' 00:16:13.840 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.840 13:32:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.100 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.100 "name": "raid_bdev1", 00:16:14.100 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:14.100 "strip_size_kb": 64, 00:16:14.100 "state": "online", 00:16:14.100 "raid_level": "raid5f", 00:16:14.100 "superblock": true, 00:16:14.100 "num_base_bdevs": 3, 00:16:14.100 "num_base_bdevs_discovered": 2, 00:16:14.100 "num_base_bdevs_operational": 2, 00:16:14.100 "base_bdevs_list": [ 00:16:14.100 { 00:16:14.100 "name": null, 00:16:14.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.100 "is_configured": false, 00:16:14.101 "data_offset": 0, 00:16:14.101 "data_size": 63488 00:16:14.101 }, 00:16:14.101 { 00:16:14.101 "name": "BaseBdev2", 00:16:14.101 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 
00:16:14.101 "is_configured": true, 00:16:14.101 "data_offset": 2048, 00:16:14.101 "data_size": 63488 00:16:14.101 }, 00:16:14.101 { 00:16:14.101 "name": "BaseBdev3", 00:16:14.101 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:14.101 "is_configured": true, 00:16:14.101 "data_offset": 2048, 00:16:14.101 "data_size": 63488 00:16:14.101 } 00:16:14.101 ] 00:16:14.101 }' 00:16:14.101 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.360 13:32:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.360 [2024-11-18 13:32:44.227628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.360 [2024-11-18 13:32:44.227827] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:14.360 [2024-11-18 13:32:44.227889] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:14.360 request: 00:16:14.360 { 00:16:14.360 "base_bdev": "BaseBdev1", 00:16:14.360 "raid_bdev": "raid_bdev1", 00:16:14.360 "method": "bdev_raid_add_base_bdev", 00:16:14.360 "req_id": 1 00:16:14.360 } 00:16:14.360 Got JSON-RPC error response 00:16:14.360 response: 00:16:14.360 { 00:16:14.360 "code": -22, 00:16:14.360 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:14.360 } 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:14.360 13:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.299 "name": "raid_bdev1", 00:16:15.299 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:15.299 "strip_size_kb": 64, 00:16:15.299 "state": "online", 00:16:15.299 "raid_level": "raid5f", 00:16:15.299 "superblock": true, 00:16:15.299 "num_base_bdevs": 3, 00:16:15.299 "num_base_bdevs_discovered": 2, 00:16:15.299 "num_base_bdevs_operational": 2, 00:16:15.299 "base_bdevs_list": [ 00:16:15.299 { 00:16:15.299 "name": null, 00:16:15.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.299 "is_configured": false, 00:16:15.299 "data_offset": 0, 00:16:15.299 "data_size": 63488 00:16:15.299 }, 00:16:15.299 { 00:16:15.299 
"name": "BaseBdev2", 00:16:15.299 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:15.299 "is_configured": true, 00:16:15.299 "data_offset": 2048, 00:16:15.299 "data_size": 63488 00:16:15.299 }, 00:16:15.299 { 00:16:15.299 "name": "BaseBdev3", 00:16:15.299 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:15.299 "is_configured": true, 00:16:15.299 "data_offset": 2048, 00:16:15.299 "data_size": 63488 00:16:15.299 } 00:16:15.299 ] 00:16:15.299 }' 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.299 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.869 "name": "raid_bdev1", 00:16:15.869 "uuid": "82393736-e8f6-477e-8025-c0463f5cee38", 00:16:15.869 
"strip_size_kb": 64, 00:16:15.869 "state": "online", 00:16:15.869 "raid_level": "raid5f", 00:16:15.869 "superblock": true, 00:16:15.869 "num_base_bdevs": 3, 00:16:15.869 "num_base_bdevs_discovered": 2, 00:16:15.869 "num_base_bdevs_operational": 2, 00:16:15.869 "base_bdevs_list": [ 00:16:15.869 { 00:16:15.869 "name": null, 00:16:15.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.869 "is_configured": false, 00:16:15.869 "data_offset": 0, 00:16:15.869 "data_size": 63488 00:16:15.869 }, 00:16:15.869 { 00:16:15.869 "name": "BaseBdev2", 00:16:15.869 "uuid": "bc059a3e-ff8f-5f37-88e0-02b9fbdbad07", 00:16:15.869 "is_configured": true, 00:16:15.869 "data_offset": 2048, 00:16:15.869 "data_size": 63488 00:16:15.869 }, 00:16:15.869 { 00:16:15.869 "name": "BaseBdev3", 00:16:15.869 "uuid": "f12b75da-cef8-5da6-bcbf-e5d1071a834f", 00:16:15.869 "is_configured": true, 00:16:15.869 "data_offset": 2048, 00:16:15.869 "data_size": 63488 00:16:15.869 } 00:16:15.869 ] 00:16:15.869 }' 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81961 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81961 ']' 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81961 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.869 13:32:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81961 00:16:15.869 killing process with pid 81961 00:16:15.869 Received shutdown signal, test time was about 60.000000 seconds 00:16:15.869 00:16:15.869 Latency(us) 00:16:15.869 [2024-11-18T13:32:45.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.869 [2024-11-18T13:32:45.923Z] =================================================================================================================== 00:16:15.869 [2024-11-18T13:32:45.923Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81961' 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81961 00:16:15.869 [2024-11-18 13:32:45.859878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.869 [2024-11-18 13:32:45.859992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.869 13:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81961 00:16:15.869 [2024-11-18 13:32:45.860051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.869 [2024-11-18 13:32:45.860061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:16.444 [2024-11-18 13:32:46.231089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.409 13:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:17.409 00:16:17.409 real 0m23.636s 00:16:17.409 user 0m30.283s 
00:16:17.409 sys 0m3.120s 00:16:17.409 13:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.409 ************************************ 00:16:17.409 END TEST raid5f_rebuild_test_sb 00:16:17.409 ************************************ 00:16:17.409 13:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 13:32:47 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:17.409 13:32:47 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:17.409 13:32:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:17.409 13:32:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.409 13:32:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 ************************************ 00:16:17.409 START TEST raid5f_state_function_test 00:16:17.409 ************************************ 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82714 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82714' 00:16:17.409 Process raid pid: 82714 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82714 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82714 ']' 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.409 13:32:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.409 [2024-11-18 13:32:47.452342] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:17.409 [2024-11-18 13:32:47.452554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.668 [2024-11-18 13:32:47.627737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.928 [2024-11-18 13:32:47.733195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.928 [2024-11-18 13:32:47.924972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.928 [2024-11-18 13:32:47.925056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.498 [2024-11-18 13:32:48.276106] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.498 [2024-11-18 13:32:48.276233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.498 [2024-11-18 13:32:48.276264] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.498 [2024-11-18 13:32:48.276286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.498 [2024-11-18 13:32:48.276304] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:18.498 [2024-11-18 13:32:48.276324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.498 [2024-11-18 13:32:48.276341] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:18.498 [2024-11-18 13:32:48.276360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.498 13:32:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.498 "name": "Existed_Raid", 00:16:18.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.498 "strip_size_kb": 64, 00:16:18.498 "state": "configuring", 00:16:18.498 "raid_level": "raid5f", 00:16:18.498 "superblock": false, 00:16:18.498 "num_base_bdevs": 4, 00:16:18.498 "num_base_bdevs_discovered": 0, 00:16:18.498 "num_base_bdevs_operational": 4, 00:16:18.498 "base_bdevs_list": [ 00:16:18.498 { 00:16:18.498 "name": "BaseBdev1", 00:16:18.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.498 "is_configured": false, 00:16:18.498 "data_offset": 0, 00:16:18.498 "data_size": 0 00:16:18.498 }, 00:16:18.498 { 00:16:18.498 "name": "BaseBdev2", 00:16:18.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.498 "is_configured": false, 00:16:18.498 "data_offset": 0, 00:16:18.498 "data_size": 0 00:16:18.498 }, 00:16:18.498 { 00:16:18.498 "name": "BaseBdev3", 00:16:18.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.498 "is_configured": false, 00:16:18.498 "data_offset": 0, 00:16:18.498 "data_size": 0 00:16:18.498 }, 00:16:18.498 { 00:16:18.498 "name": "BaseBdev4", 00:16:18.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.498 "is_configured": false, 00:16:18.498 "data_offset": 0, 00:16:18.498 "data_size": 0 00:16:18.498 } 00:16:18.498 ] 00:16:18.498 }' 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.498 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.759 [2024-11-18 13:32:48.767213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.759 [2024-11-18 13:32:48.767245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.759 [2024-11-18 13:32:48.779195] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.759 [2024-11-18 13:32:48.779233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.759 [2024-11-18 13:32:48.779241] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.759 [2024-11-18 13:32:48.779250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.759 [2024-11-18 13:32:48.779256] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.759 [2024-11-18 13:32:48.779264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.759 [2024-11-18 13:32:48.779270] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:18.759 [2024-11-18 13:32:48.779278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.759 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.019 [2024-11-18 13:32:48.827863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.019 BaseBdev1 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.019 
13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.019 [ 00:16:19.019 { 00:16:19.019 "name": "BaseBdev1", 00:16:19.019 "aliases": [ 00:16:19.019 "469b7020-86a4-4ece-aa41-6adbe2a7f5d8" 00:16:19.019 ], 00:16:19.019 "product_name": "Malloc disk", 00:16:19.019 "block_size": 512, 00:16:19.019 "num_blocks": 65536, 00:16:19.019 "uuid": "469b7020-86a4-4ece-aa41-6adbe2a7f5d8", 00:16:19.019 "assigned_rate_limits": { 00:16:19.019 "rw_ios_per_sec": 0, 00:16:19.019 "rw_mbytes_per_sec": 0, 00:16:19.019 "r_mbytes_per_sec": 0, 00:16:19.019 "w_mbytes_per_sec": 0 00:16:19.019 }, 00:16:19.019 "claimed": true, 00:16:19.019 "claim_type": "exclusive_write", 00:16:19.019 "zoned": false, 00:16:19.019 "supported_io_types": { 00:16:19.019 "read": true, 00:16:19.019 "write": true, 00:16:19.019 "unmap": true, 00:16:19.019 "flush": true, 00:16:19.019 "reset": true, 00:16:19.019 "nvme_admin": false, 00:16:19.019 "nvme_io": false, 00:16:19.019 "nvme_io_md": false, 00:16:19.019 "write_zeroes": true, 00:16:19.019 "zcopy": true, 00:16:19.019 "get_zone_info": false, 00:16:19.019 "zone_management": false, 00:16:19.019 "zone_append": false, 00:16:19.019 "compare": false, 00:16:19.019 "compare_and_write": false, 00:16:19.019 "abort": true, 00:16:19.019 "seek_hole": false, 00:16:19.019 "seek_data": false, 00:16:19.019 "copy": true, 00:16:19.019 "nvme_iov_md": false 00:16:19.019 }, 00:16:19.019 "memory_domains": [ 00:16:19.019 { 00:16:19.019 "dma_device_id": "system", 00:16:19.019 "dma_device_type": 1 00:16:19.019 }, 00:16:19.019 { 00:16:19.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.019 "dma_device_type": 2 00:16:19.019 } 00:16:19.019 ], 00:16:19.019 "driver_specific": {} 00:16:19.019 } 
00:16:19.019 ] 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.019 "name": "Existed_Raid", 00:16:19.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.019 "strip_size_kb": 64, 00:16:19.019 "state": "configuring", 00:16:19.019 "raid_level": "raid5f", 00:16:19.019 "superblock": false, 00:16:19.019 "num_base_bdevs": 4, 00:16:19.019 "num_base_bdevs_discovered": 1, 00:16:19.019 "num_base_bdevs_operational": 4, 00:16:19.019 "base_bdevs_list": [ 00:16:19.019 { 00:16:19.019 "name": "BaseBdev1", 00:16:19.019 "uuid": "469b7020-86a4-4ece-aa41-6adbe2a7f5d8", 00:16:19.019 "is_configured": true, 00:16:19.019 "data_offset": 0, 00:16:19.019 "data_size": 65536 00:16:19.019 }, 00:16:19.019 { 00:16:19.019 "name": "BaseBdev2", 00:16:19.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.019 "is_configured": false, 00:16:19.019 "data_offset": 0, 00:16:19.019 "data_size": 0 00:16:19.019 }, 00:16:19.019 { 00:16:19.019 "name": "BaseBdev3", 00:16:19.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.019 "is_configured": false, 00:16:19.019 "data_offset": 0, 00:16:19.019 "data_size": 0 00:16:19.019 }, 00:16:19.019 { 00:16:19.019 "name": "BaseBdev4", 00:16:19.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.019 "is_configured": false, 00:16:19.019 "data_offset": 0, 00:16:19.019 "data_size": 0 00:16:19.019 } 00:16:19.019 ] 00:16:19.019 }' 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.019 13:32:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.278 
[2024-11-18 13:32:49.207238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.278 [2024-11-18 13:32:49.207314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.278 [2024-11-18 13:32:49.219301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.278 [2024-11-18 13:32:49.221073] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.278 [2024-11-18 13:32:49.221162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.278 [2024-11-18 13:32:49.221192] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.278 [2024-11-18 13:32:49.221216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.278 [2024-11-18 13:32:49.221235] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:19.278 [2024-11-18 13:32:49.221254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.278 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.278 "name": "Existed_Raid", 00:16:19.278 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:19.278 "strip_size_kb": 64, 00:16:19.278 "state": "configuring", 00:16:19.278 "raid_level": "raid5f", 00:16:19.278 "superblock": false, 00:16:19.278 "num_base_bdevs": 4, 00:16:19.278 "num_base_bdevs_discovered": 1, 00:16:19.278 "num_base_bdevs_operational": 4, 00:16:19.278 "base_bdevs_list": [ 00:16:19.278 { 00:16:19.278 "name": "BaseBdev1", 00:16:19.278 "uuid": "469b7020-86a4-4ece-aa41-6adbe2a7f5d8", 00:16:19.278 "is_configured": true, 00:16:19.278 "data_offset": 0, 00:16:19.278 "data_size": 65536 00:16:19.278 }, 00:16:19.279 { 00:16:19.279 "name": "BaseBdev2", 00:16:19.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.279 "is_configured": false, 00:16:19.279 "data_offset": 0, 00:16:19.279 "data_size": 0 00:16:19.279 }, 00:16:19.279 { 00:16:19.279 "name": "BaseBdev3", 00:16:19.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.279 "is_configured": false, 00:16:19.279 "data_offset": 0, 00:16:19.279 "data_size": 0 00:16:19.279 }, 00:16:19.279 { 00:16:19.279 "name": "BaseBdev4", 00:16:19.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.279 "is_configured": false, 00:16:19.279 "data_offset": 0, 00:16:19.279 "data_size": 0 00:16:19.279 } 00:16:19.279 ] 00:16:19.279 }' 00:16:19.279 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.279 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.847 [2024-11-18 13:32:49.684164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.847 BaseBdev2 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.847 [ 00:16:19.847 { 00:16:19.847 "name": "BaseBdev2", 00:16:19.847 "aliases": [ 00:16:19.847 "a8a414f8-a3dd-4a81-a8fe-71e294d07715" 00:16:19.847 ], 00:16:19.847 "product_name": "Malloc disk", 00:16:19.847 "block_size": 512, 00:16:19.847 "num_blocks": 65536, 00:16:19.847 "uuid": "a8a414f8-a3dd-4a81-a8fe-71e294d07715", 00:16:19.847 "assigned_rate_limits": { 00:16:19.847 "rw_ios_per_sec": 0, 00:16:19.847 "rw_mbytes_per_sec": 0, 00:16:19.847 
"r_mbytes_per_sec": 0, 00:16:19.847 "w_mbytes_per_sec": 0 00:16:19.847 }, 00:16:19.847 "claimed": true, 00:16:19.847 "claim_type": "exclusive_write", 00:16:19.847 "zoned": false, 00:16:19.847 "supported_io_types": { 00:16:19.847 "read": true, 00:16:19.847 "write": true, 00:16:19.847 "unmap": true, 00:16:19.847 "flush": true, 00:16:19.847 "reset": true, 00:16:19.847 "nvme_admin": false, 00:16:19.847 "nvme_io": false, 00:16:19.847 "nvme_io_md": false, 00:16:19.847 "write_zeroes": true, 00:16:19.847 "zcopy": true, 00:16:19.847 "get_zone_info": false, 00:16:19.847 "zone_management": false, 00:16:19.847 "zone_append": false, 00:16:19.847 "compare": false, 00:16:19.847 "compare_and_write": false, 00:16:19.847 "abort": true, 00:16:19.847 "seek_hole": false, 00:16:19.847 "seek_data": false, 00:16:19.847 "copy": true, 00:16:19.847 "nvme_iov_md": false 00:16:19.847 }, 00:16:19.847 "memory_domains": [ 00:16:19.847 { 00:16:19.847 "dma_device_id": "system", 00:16:19.847 "dma_device_type": 1 00:16:19.847 }, 00:16:19.847 { 00:16:19.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.847 "dma_device_type": 2 00:16:19.847 } 00:16:19.847 ], 00:16:19.847 "driver_specific": {} 00:16:19.847 } 00:16:19.847 ] 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.847 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.847 "name": "Existed_Raid", 00:16:19.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.847 "strip_size_kb": 64, 00:16:19.847 "state": "configuring", 00:16:19.847 "raid_level": "raid5f", 00:16:19.847 "superblock": false, 00:16:19.848 "num_base_bdevs": 4, 00:16:19.848 "num_base_bdevs_discovered": 2, 00:16:19.848 "num_base_bdevs_operational": 4, 00:16:19.848 "base_bdevs_list": [ 00:16:19.848 { 00:16:19.848 "name": "BaseBdev1", 00:16:19.848 "uuid": 
"469b7020-86a4-4ece-aa41-6adbe2a7f5d8", 00:16:19.848 "is_configured": true, 00:16:19.848 "data_offset": 0, 00:16:19.848 "data_size": 65536 00:16:19.848 }, 00:16:19.848 { 00:16:19.848 "name": "BaseBdev2", 00:16:19.848 "uuid": "a8a414f8-a3dd-4a81-a8fe-71e294d07715", 00:16:19.848 "is_configured": true, 00:16:19.848 "data_offset": 0, 00:16:19.848 "data_size": 65536 00:16:19.848 }, 00:16:19.848 { 00:16:19.848 "name": "BaseBdev3", 00:16:19.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.848 "is_configured": false, 00:16:19.848 "data_offset": 0, 00:16:19.848 "data_size": 0 00:16:19.848 }, 00:16:19.848 { 00:16:19.848 "name": "BaseBdev4", 00:16:19.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.848 "is_configured": false, 00:16:19.848 "data_offset": 0, 00:16:19.848 "data_size": 0 00:16:19.848 } 00:16:19.848 ] 00:16:19.848 }' 00:16:19.848 13:32:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.848 13:32:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.107 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:20.107 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.107 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.368 [2024-11-18 13:32:50.181798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.368 BaseBdev3 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.368 [ 00:16:20.368 { 00:16:20.368 "name": "BaseBdev3", 00:16:20.368 "aliases": [ 00:16:20.368 "9edf4ae0-a528-47dd-bfa3-b7535ef89b7d" 00:16:20.368 ], 00:16:20.368 "product_name": "Malloc disk", 00:16:20.368 "block_size": 512, 00:16:20.368 "num_blocks": 65536, 00:16:20.368 "uuid": "9edf4ae0-a528-47dd-bfa3-b7535ef89b7d", 00:16:20.368 "assigned_rate_limits": { 00:16:20.368 "rw_ios_per_sec": 0, 00:16:20.368 "rw_mbytes_per_sec": 0, 00:16:20.368 "r_mbytes_per_sec": 0, 00:16:20.368 "w_mbytes_per_sec": 0 00:16:20.368 }, 00:16:20.368 "claimed": true, 00:16:20.368 "claim_type": "exclusive_write", 00:16:20.368 "zoned": false, 00:16:20.368 "supported_io_types": { 00:16:20.368 "read": true, 00:16:20.368 "write": true, 00:16:20.368 "unmap": true, 00:16:20.368 "flush": true, 00:16:20.368 "reset": true, 00:16:20.368 "nvme_admin": false, 
00:16:20.368 "nvme_io": false, 00:16:20.368 "nvme_io_md": false, 00:16:20.368 "write_zeroes": true, 00:16:20.368 "zcopy": true, 00:16:20.368 "get_zone_info": false, 00:16:20.368 "zone_management": false, 00:16:20.368 "zone_append": false, 00:16:20.368 "compare": false, 00:16:20.368 "compare_and_write": false, 00:16:20.368 "abort": true, 00:16:20.368 "seek_hole": false, 00:16:20.368 "seek_data": false, 00:16:20.368 "copy": true, 00:16:20.368 "nvme_iov_md": false 00:16:20.368 }, 00:16:20.368 "memory_domains": [ 00:16:20.368 { 00:16:20.368 "dma_device_id": "system", 00:16:20.368 "dma_device_type": 1 00:16:20.368 }, 00:16:20.368 { 00:16:20.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.368 "dma_device_type": 2 00:16:20.368 } 00:16:20.368 ], 00:16:20.368 "driver_specific": {} 00:16:20.368 } 00:16:20.368 ] 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.368 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.368 "name": "Existed_Raid", 00:16:20.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.368 "strip_size_kb": 64, 00:16:20.368 "state": "configuring", 00:16:20.368 "raid_level": "raid5f", 00:16:20.368 "superblock": false, 00:16:20.368 "num_base_bdevs": 4, 00:16:20.368 "num_base_bdevs_discovered": 3, 00:16:20.368 "num_base_bdevs_operational": 4, 00:16:20.368 "base_bdevs_list": [ 00:16:20.368 { 00:16:20.368 "name": "BaseBdev1", 00:16:20.368 "uuid": "469b7020-86a4-4ece-aa41-6adbe2a7f5d8", 00:16:20.368 "is_configured": true, 00:16:20.368 "data_offset": 0, 00:16:20.368 "data_size": 65536 00:16:20.368 }, 00:16:20.368 { 00:16:20.368 "name": "BaseBdev2", 00:16:20.369 "uuid": "a8a414f8-a3dd-4a81-a8fe-71e294d07715", 00:16:20.369 "is_configured": true, 00:16:20.369 "data_offset": 0, 00:16:20.369 "data_size": 65536 00:16:20.369 }, 00:16:20.369 { 
00:16:20.369 "name": "BaseBdev3", 00:16:20.369 "uuid": "9edf4ae0-a528-47dd-bfa3-b7535ef89b7d", 00:16:20.369 "is_configured": true, 00:16:20.369 "data_offset": 0, 00:16:20.369 "data_size": 65536 00:16:20.369 }, 00:16:20.369 { 00:16:20.369 "name": "BaseBdev4", 00:16:20.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.369 "is_configured": false, 00:16:20.369 "data_offset": 0, 00:16:20.369 "data_size": 0 00:16:20.369 } 00:16:20.369 ] 00:16:20.369 }' 00:16:20.369 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.369 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.628 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:20.628 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.628 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.888 [2024-11-18 13:32:50.692587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:20.888 [2024-11-18 13:32:50.692656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:20.888 [2024-11-18 13:32:50.692665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:20.888 [2024-11-18 13:32:50.692905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:20.888 [2024-11-18 13:32:50.699600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:20.888 [2024-11-18 13:32:50.699624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:20.888 [2024-11-18 13:32:50.699896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.888 BaseBdev4 00:16:20.888 13:32:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.889 [ 00:16:20.889 { 00:16:20.889 "name": "BaseBdev4", 00:16:20.889 "aliases": [ 00:16:20.889 "170b5d62-5e73-490d-9046-06421ee55cb6" 00:16:20.889 ], 00:16:20.889 "product_name": "Malloc disk", 00:16:20.889 "block_size": 512, 00:16:20.889 "num_blocks": 65536, 00:16:20.889 "uuid": "170b5d62-5e73-490d-9046-06421ee55cb6", 00:16:20.889 "assigned_rate_limits": { 00:16:20.889 "rw_ios_per_sec": 0, 00:16:20.889 
"rw_mbytes_per_sec": 0, 00:16:20.889 "r_mbytes_per_sec": 0, 00:16:20.889 "w_mbytes_per_sec": 0 00:16:20.889 }, 00:16:20.889 "claimed": true, 00:16:20.889 "claim_type": "exclusive_write", 00:16:20.889 "zoned": false, 00:16:20.889 "supported_io_types": { 00:16:20.889 "read": true, 00:16:20.889 "write": true, 00:16:20.889 "unmap": true, 00:16:20.889 "flush": true, 00:16:20.889 "reset": true, 00:16:20.889 "nvme_admin": false, 00:16:20.889 "nvme_io": false, 00:16:20.889 "nvme_io_md": false, 00:16:20.889 "write_zeroes": true, 00:16:20.889 "zcopy": true, 00:16:20.889 "get_zone_info": false, 00:16:20.889 "zone_management": false, 00:16:20.889 "zone_append": false, 00:16:20.889 "compare": false, 00:16:20.889 "compare_and_write": false, 00:16:20.889 "abort": true, 00:16:20.889 "seek_hole": false, 00:16:20.889 "seek_data": false, 00:16:20.889 "copy": true, 00:16:20.889 "nvme_iov_md": false 00:16:20.889 }, 00:16:20.889 "memory_domains": [ 00:16:20.889 { 00:16:20.889 "dma_device_id": "system", 00:16:20.889 "dma_device_type": 1 00:16:20.889 }, 00:16:20.889 { 00:16:20.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.889 "dma_device_type": 2 00:16:20.889 } 00:16:20.889 ], 00:16:20.889 "driver_specific": {} 00:16:20.889 } 00:16:20.889 ] 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.889 13:32:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.889 "name": "Existed_Raid", 00:16:20.889 "uuid": "0cd2bc67-ef66-4634-9ad4-df42aef93e41", 00:16:20.889 "strip_size_kb": 64, 00:16:20.889 "state": "online", 00:16:20.889 "raid_level": "raid5f", 00:16:20.889 "superblock": false, 00:16:20.889 "num_base_bdevs": 4, 00:16:20.889 "num_base_bdevs_discovered": 4, 00:16:20.889 "num_base_bdevs_operational": 4, 00:16:20.889 "base_bdevs_list": [ 00:16:20.889 { 00:16:20.889 "name": 
"BaseBdev1", 00:16:20.889 "uuid": "469b7020-86a4-4ece-aa41-6adbe2a7f5d8", 00:16:20.889 "is_configured": true, 00:16:20.889 "data_offset": 0, 00:16:20.889 "data_size": 65536 00:16:20.889 }, 00:16:20.889 { 00:16:20.889 "name": "BaseBdev2", 00:16:20.889 "uuid": "a8a414f8-a3dd-4a81-a8fe-71e294d07715", 00:16:20.889 "is_configured": true, 00:16:20.889 "data_offset": 0, 00:16:20.889 "data_size": 65536 00:16:20.889 }, 00:16:20.889 { 00:16:20.889 "name": "BaseBdev3", 00:16:20.889 "uuid": "9edf4ae0-a528-47dd-bfa3-b7535ef89b7d", 00:16:20.889 "is_configured": true, 00:16:20.889 "data_offset": 0, 00:16:20.889 "data_size": 65536 00:16:20.889 }, 00:16:20.889 { 00:16:20.889 "name": "BaseBdev4", 00:16:20.889 "uuid": "170b5d62-5e73-490d-9046-06421ee55cb6", 00:16:20.889 "is_configured": true, 00:16:20.889 "data_offset": 0, 00:16:20.889 "data_size": 65536 00:16:20.889 } 00:16:20.889 ] 00:16:20.889 }' 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.889 13:32:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.459 [2024-11-18 13:32:51.223092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.459 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.459 "name": "Existed_Raid", 00:16:21.459 "aliases": [ 00:16:21.459 "0cd2bc67-ef66-4634-9ad4-df42aef93e41" 00:16:21.459 ], 00:16:21.459 "product_name": "Raid Volume", 00:16:21.459 "block_size": 512, 00:16:21.459 "num_blocks": 196608, 00:16:21.459 "uuid": "0cd2bc67-ef66-4634-9ad4-df42aef93e41", 00:16:21.459 "assigned_rate_limits": { 00:16:21.459 "rw_ios_per_sec": 0, 00:16:21.459 "rw_mbytes_per_sec": 0, 00:16:21.459 "r_mbytes_per_sec": 0, 00:16:21.459 "w_mbytes_per_sec": 0 00:16:21.459 }, 00:16:21.459 "claimed": false, 00:16:21.459 "zoned": false, 00:16:21.459 "supported_io_types": { 00:16:21.459 "read": true, 00:16:21.459 "write": true, 00:16:21.459 "unmap": false, 00:16:21.459 "flush": false, 00:16:21.459 "reset": true, 00:16:21.459 "nvme_admin": false, 00:16:21.459 "nvme_io": false, 00:16:21.459 "nvme_io_md": false, 00:16:21.459 "write_zeroes": true, 00:16:21.459 "zcopy": false, 00:16:21.459 "get_zone_info": false, 00:16:21.459 "zone_management": false, 00:16:21.459 "zone_append": false, 00:16:21.459 "compare": false, 00:16:21.459 "compare_and_write": false, 00:16:21.459 "abort": false, 00:16:21.459 "seek_hole": false, 00:16:21.459 "seek_data": false, 00:16:21.459 "copy": false, 00:16:21.459 "nvme_iov_md": false 00:16:21.459 }, 00:16:21.459 "driver_specific": { 00:16:21.459 "raid": { 00:16:21.459 "uuid": "0cd2bc67-ef66-4634-9ad4-df42aef93e41", 00:16:21.459 "strip_size_kb": 64, 
00:16:21.459 "state": "online", 00:16:21.459 "raid_level": "raid5f", 00:16:21.459 "superblock": false, 00:16:21.459 "num_base_bdevs": 4, 00:16:21.459 "num_base_bdevs_discovered": 4, 00:16:21.459 "num_base_bdevs_operational": 4, 00:16:21.459 "base_bdevs_list": [ 00:16:21.459 { 00:16:21.459 "name": "BaseBdev1", 00:16:21.459 "uuid": "469b7020-86a4-4ece-aa41-6adbe2a7f5d8", 00:16:21.459 "is_configured": true, 00:16:21.459 "data_offset": 0, 00:16:21.459 "data_size": 65536 00:16:21.459 }, 00:16:21.459 { 00:16:21.459 "name": "BaseBdev2", 00:16:21.459 "uuid": "a8a414f8-a3dd-4a81-a8fe-71e294d07715", 00:16:21.459 "is_configured": true, 00:16:21.459 "data_offset": 0, 00:16:21.459 "data_size": 65536 00:16:21.459 }, 00:16:21.459 { 00:16:21.459 "name": "BaseBdev3", 00:16:21.459 "uuid": "9edf4ae0-a528-47dd-bfa3-b7535ef89b7d", 00:16:21.459 "is_configured": true, 00:16:21.459 "data_offset": 0, 00:16:21.459 "data_size": 65536 00:16:21.459 }, 00:16:21.459 { 00:16:21.459 "name": "BaseBdev4", 00:16:21.459 "uuid": "170b5d62-5e73-490d-9046-06421ee55cb6", 00:16:21.459 "is_configured": true, 00:16:21.459 "data_offset": 0, 00:16:21.459 "data_size": 65536 00:16:21.459 } 00:16:21.459 ] 00:16:21.459 } 00:16:21.459 } 00:16:21.459 }' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:21.460 BaseBdev2 00:16:21.460 BaseBdev3 00:16:21.460 BaseBdev4' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.460 13:32:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.460 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:21.721 [2024-11-18 13:32:51.534491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.721 13:32:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.721 "name": "Existed_Raid", 00:16:21.721 "uuid": "0cd2bc67-ef66-4634-9ad4-df42aef93e41", 00:16:21.721 "strip_size_kb": 64, 00:16:21.721 "state": "online", 00:16:21.721 "raid_level": "raid5f", 00:16:21.721 "superblock": false, 00:16:21.721 "num_base_bdevs": 4, 00:16:21.721 "num_base_bdevs_discovered": 3, 00:16:21.721 "num_base_bdevs_operational": 3, 00:16:21.721 "base_bdevs_list": [ 00:16:21.721 { 00:16:21.721 "name": null, 00:16:21.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.721 "is_configured": false, 00:16:21.721 "data_offset": 0, 00:16:21.721 "data_size": 65536 00:16:21.721 }, 00:16:21.721 { 00:16:21.721 "name": "BaseBdev2", 00:16:21.721 "uuid": "a8a414f8-a3dd-4a81-a8fe-71e294d07715", 00:16:21.721 "is_configured": true, 00:16:21.721 "data_offset": 0, 00:16:21.721 "data_size": 65536 00:16:21.721 }, 00:16:21.721 { 00:16:21.721 "name": "BaseBdev3", 00:16:21.721 "uuid": "9edf4ae0-a528-47dd-bfa3-b7535ef89b7d", 00:16:21.721 "is_configured": true, 00:16:21.721 "data_offset": 0, 00:16:21.721 "data_size": 65536 00:16:21.721 }, 00:16:21.721 { 00:16:21.721 "name": "BaseBdev4", 00:16:21.721 "uuid": "170b5d62-5e73-490d-9046-06421ee55cb6", 00:16:21.721 "is_configured": true, 00:16:21.721 "data_offset": 0, 00:16:21.721 "data_size": 65536 00:16:21.721 } 00:16:21.721 ] 00:16:21.721 }' 00:16:21.721 
13:32:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.721 13:32:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.291 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:22.291 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.291 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.291 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.291 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.291 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.292 [2024-11-18 13:32:52.097994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.292 [2024-11-18 13:32:52.098139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.292 [2024-11-18 13:32:52.187783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.292 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.292 [2024-11-18 13:32:52.251674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.552 [2024-11-18 13:32:52.403911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:22.552 [2024-11-18 13:32:52.404003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.552 13:32:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.552 BaseBdev2 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.552 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 [ 00:16:22.814 { 00:16:22.814 "name": "BaseBdev2", 00:16:22.814 "aliases": [ 00:16:22.814 "3d22b223-1cf1-4db8-ac73-26d794974f6b" 00:16:22.814 ], 00:16:22.814 "product_name": "Malloc disk", 00:16:22.814 "block_size": 512, 00:16:22.814 "num_blocks": 65536, 00:16:22.814 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:22.814 "assigned_rate_limits": { 00:16:22.814 "rw_ios_per_sec": 0, 00:16:22.814 "rw_mbytes_per_sec": 0, 00:16:22.814 "r_mbytes_per_sec": 0, 00:16:22.814 "w_mbytes_per_sec": 0 00:16:22.814 }, 00:16:22.814 "claimed": false, 00:16:22.814 "zoned": false, 00:16:22.814 "supported_io_types": { 00:16:22.814 "read": true, 00:16:22.814 "write": true, 00:16:22.814 "unmap": true, 00:16:22.814 "flush": true, 00:16:22.814 "reset": true, 00:16:22.814 "nvme_admin": false, 00:16:22.814 "nvme_io": false, 00:16:22.814 "nvme_io_md": false, 00:16:22.814 "write_zeroes": true, 00:16:22.814 "zcopy": true, 00:16:22.814 "get_zone_info": false, 00:16:22.814 "zone_management": false, 00:16:22.814 "zone_append": false, 00:16:22.814 "compare": false, 00:16:22.814 "compare_and_write": false, 00:16:22.814 "abort": true, 00:16:22.814 "seek_hole": false, 00:16:22.814 "seek_data": false, 00:16:22.814 "copy": true, 00:16:22.814 "nvme_iov_md": false 00:16:22.814 }, 00:16:22.814 "memory_domains": [ 00:16:22.814 { 00:16:22.814 "dma_device_id": "system", 00:16:22.814 "dma_device_type": 1 00:16:22.814 }, 
00:16:22.814 { 00:16:22.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.814 "dma_device_type": 2 00:16:22.814 } 00:16:22.814 ], 00:16:22.814 "driver_specific": {} 00:16:22.814 } 00:16:22.814 ] 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 BaseBdev3 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 [ 00:16:22.814 { 00:16:22.814 "name": "BaseBdev3", 00:16:22.814 "aliases": [ 00:16:22.814 "8dce67e3-4c28-4b08-a304-90708084d96a" 00:16:22.814 ], 00:16:22.814 "product_name": "Malloc disk", 00:16:22.814 "block_size": 512, 00:16:22.814 "num_blocks": 65536, 00:16:22.814 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:22.814 "assigned_rate_limits": { 00:16:22.814 "rw_ios_per_sec": 0, 00:16:22.814 "rw_mbytes_per_sec": 0, 00:16:22.814 "r_mbytes_per_sec": 0, 00:16:22.814 "w_mbytes_per_sec": 0 00:16:22.814 }, 00:16:22.814 "claimed": false, 00:16:22.814 "zoned": false, 00:16:22.814 "supported_io_types": { 00:16:22.814 "read": true, 00:16:22.814 "write": true, 00:16:22.814 "unmap": true, 00:16:22.814 "flush": true, 00:16:22.814 "reset": true, 00:16:22.814 "nvme_admin": false, 00:16:22.814 "nvme_io": false, 00:16:22.814 "nvme_io_md": false, 00:16:22.814 "write_zeroes": true, 00:16:22.814 "zcopy": true, 00:16:22.814 "get_zone_info": false, 00:16:22.814 "zone_management": false, 00:16:22.814 "zone_append": false, 00:16:22.814 "compare": false, 00:16:22.814 "compare_and_write": false, 00:16:22.814 "abort": true, 00:16:22.814 "seek_hole": false, 00:16:22.814 "seek_data": false, 00:16:22.814 "copy": true, 00:16:22.814 "nvme_iov_md": false 00:16:22.814 }, 00:16:22.814 "memory_domains": [ 00:16:22.814 { 00:16:22.814 "dma_device_id": "system", 00:16:22.814 
"dma_device_type": 1 00:16:22.814 }, 00:16:22.814 { 00:16:22.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.814 "dma_device_type": 2 00:16:22.814 } 00:16:22.814 ], 00:16:22.814 "driver_specific": {} 00:16:22.814 } 00:16:22.814 ] 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.814 BaseBdev4 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:22.814 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.815 13:32:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.815 [ 00:16:22.815 { 00:16:22.815 "name": "BaseBdev4", 00:16:22.815 "aliases": [ 00:16:22.815 "a07990da-48c1-40bc-a468-cdb7937a429a" 00:16:22.815 ], 00:16:22.815 "product_name": "Malloc disk", 00:16:22.815 "block_size": 512, 00:16:22.815 "num_blocks": 65536, 00:16:22.815 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:22.815 "assigned_rate_limits": { 00:16:22.815 "rw_ios_per_sec": 0, 00:16:22.815 "rw_mbytes_per_sec": 0, 00:16:22.815 "r_mbytes_per_sec": 0, 00:16:22.815 "w_mbytes_per_sec": 0 00:16:22.815 }, 00:16:22.815 "claimed": false, 00:16:22.815 "zoned": false, 00:16:22.815 "supported_io_types": { 00:16:22.815 "read": true, 00:16:22.815 "write": true, 00:16:22.815 "unmap": true, 00:16:22.815 "flush": true, 00:16:22.815 "reset": true, 00:16:22.815 "nvme_admin": false, 00:16:22.815 "nvme_io": false, 00:16:22.815 "nvme_io_md": false, 00:16:22.815 "write_zeroes": true, 00:16:22.815 "zcopy": true, 00:16:22.815 "get_zone_info": false, 00:16:22.815 "zone_management": false, 00:16:22.815 "zone_append": false, 00:16:22.815 "compare": false, 00:16:22.815 "compare_and_write": false, 00:16:22.815 "abort": true, 00:16:22.815 "seek_hole": false, 00:16:22.815 "seek_data": false, 00:16:22.815 "copy": true, 00:16:22.815 "nvme_iov_md": false 00:16:22.815 }, 00:16:22.815 "memory_domains": [ 00:16:22.815 { 00:16:22.815 
"dma_device_id": "system", 00:16:22.815 "dma_device_type": 1 00:16:22.815 }, 00:16:22.815 { 00:16:22.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.815 "dma_device_type": 2 00:16:22.815 } 00:16:22.815 ], 00:16:22.815 "driver_specific": {} 00:16:22.815 } 00:16:22.815 ] 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.815 [2024-11-18 13:32:52.786439] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.815 [2024-11-18 13:32:52.786534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.815 [2024-11-18 13:32:52.786574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.815 [2024-11-18 13:32:52.788363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.815 [2024-11-18 13:32:52.788452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.815 "name": "Existed_Raid", 00:16:22.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.815 "strip_size_kb": 64, 00:16:22.815 "state": "configuring", 00:16:22.815 "raid_level": "raid5f", 00:16:22.815 "superblock": false, 00:16:22.815 
"num_base_bdevs": 4, 00:16:22.815 "num_base_bdevs_discovered": 3, 00:16:22.815 "num_base_bdevs_operational": 4, 00:16:22.815 "base_bdevs_list": [ 00:16:22.815 { 00:16:22.815 "name": "BaseBdev1", 00:16:22.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.815 "is_configured": false, 00:16:22.815 "data_offset": 0, 00:16:22.815 "data_size": 0 00:16:22.815 }, 00:16:22.815 { 00:16:22.815 "name": "BaseBdev2", 00:16:22.815 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:22.815 "is_configured": true, 00:16:22.815 "data_offset": 0, 00:16:22.815 "data_size": 65536 00:16:22.815 }, 00:16:22.815 { 00:16:22.815 "name": "BaseBdev3", 00:16:22.815 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:22.815 "is_configured": true, 00:16:22.815 "data_offset": 0, 00:16:22.815 "data_size": 65536 00:16:22.815 }, 00:16:22.815 { 00:16:22.815 "name": "BaseBdev4", 00:16:22.815 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:22.815 "is_configured": true, 00:16:22.815 "data_offset": 0, 00:16:22.815 "data_size": 65536 00:16:22.815 } 00:16:22.815 ] 00:16:22.815 }' 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.815 13:32:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.386 [2024-11-18 13:32:53.229635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.386 "name": "Existed_Raid", 00:16:23.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.386 "strip_size_kb": 64, 00:16:23.386 "state": "configuring", 00:16:23.386 "raid_level": "raid5f", 00:16:23.386 "superblock": false, 00:16:23.386 "num_base_bdevs": 4, 
00:16:23.386 "num_base_bdevs_discovered": 2, 00:16:23.386 "num_base_bdevs_operational": 4, 00:16:23.386 "base_bdevs_list": [ 00:16:23.386 { 00:16:23.386 "name": "BaseBdev1", 00:16:23.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.386 "is_configured": false, 00:16:23.386 "data_offset": 0, 00:16:23.386 "data_size": 0 00:16:23.386 }, 00:16:23.386 { 00:16:23.386 "name": null, 00:16:23.386 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:23.386 "is_configured": false, 00:16:23.386 "data_offset": 0, 00:16:23.386 "data_size": 65536 00:16:23.386 }, 00:16:23.386 { 00:16:23.386 "name": "BaseBdev3", 00:16:23.386 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:23.386 "is_configured": true, 00:16:23.386 "data_offset": 0, 00:16:23.386 "data_size": 65536 00:16:23.386 }, 00:16:23.386 { 00:16:23.386 "name": "BaseBdev4", 00:16:23.386 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:23.386 "is_configured": true, 00:16:23.386 "data_offset": 0, 00:16:23.386 "data_size": 65536 00:16:23.386 } 00:16:23.386 ] 00:16:23.386 }' 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.386 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.646 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:23.646 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.646 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.646 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.646 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.646 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:23.646 13:32:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.646 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.646 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.907 [2024-11-18 13:32:53.700158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.907 BaseBdev1 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.907 13:32:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.907 [ 00:16:23.907 { 00:16:23.907 "name": "BaseBdev1", 00:16:23.907 "aliases": [ 00:16:23.907 "dc350fe9-d46e-4c0c-bd6f-7a221991a893" 00:16:23.907 ], 00:16:23.907 "product_name": "Malloc disk", 00:16:23.907 "block_size": 512, 00:16:23.907 "num_blocks": 65536, 00:16:23.907 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:23.907 "assigned_rate_limits": { 00:16:23.907 "rw_ios_per_sec": 0, 00:16:23.907 "rw_mbytes_per_sec": 0, 00:16:23.907 "r_mbytes_per_sec": 0, 00:16:23.907 "w_mbytes_per_sec": 0 00:16:23.907 }, 00:16:23.907 "claimed": true, 00:16:23.907 "claim_type": "exclusive_write", 00:16:23.907 "zoned": false, 00:16:23.907 "supported_io_types": { 00:16:23.907 "read": true, 00:16:23.907 "write": true, 00:16:23.907 "unmap": true, 00:16:23.907 "flush": true, 00:16:23.907 "reset": true, 00:16:23.907 "nvme_admin": false, 00:16:23.907 "nvme_io": false, 00:16:23.907 "nvme_io_md": false, 00:16:23.907 "write_zeroes": true, 00:16:23.907 "zcopy": true, 00:16:23.907 "get_zone_info": false, 00:16:23.907 "zone_management": false, 00:16:23.907 "zone_append": false, 00:16:23.907 "compare": false, 00:16:23.907 "compare_and_write": false, 00:16:23.907 "abort": true, 00:16:23.907 "seek_hole": false, 00:16:23.907 "seek_data": false, 00:16:23.907 "copy": true, 00:16:23.907 "nvme_iov_md": false 00:16:23.907 }, 00:16:23.907 "memory_domains": [ 00:16:23.907 { 00:16:23.907 "dma_device_id": "system", 00:16:23.907 "dma_device_type": 1 00:16:23.907 }, 00:16:23.907 { 00:16:23.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.907 "dma_device_type": 2 00:16:23.907 } 00:16:23.907 ], 00:16:23.907 "driver_specific": {} 00:16:23.907 } 00:16:23.907 ] 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:23.907 13:32:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.907 "name": "Existed_Raid", 00:16:23.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.907 "strip_size_kb": 64, 00:16:23.907 "state": 
"configuring", 00:16:23.907 "raid_level": "raid5f", 00:16:23.907 "superblock": false, 00:16:23.907 "num_base_bdevs": 4, 00:16:23.907 "num_base_bdevs_discovered": 3, 00:16:23.907 "num_base_bdevs_operational": 4, 00:16:23.907 "base_bdevs_list": [ 00:16:23.907 { 00:16:23.907 "name": "BaseBdev1", 00:16:23.907 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:23.907 "is_configured": true, 00:16:23.907 "data_offset": 0, 00:16:23.907 "data_size": 65536 00:16:23.907 }, 00:16:23.907 { 00:16:23.907 "name": null, 00:16:23.907 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:23.907 "is_configured": false, 00:16:23.907 "data_offset": 0, 00:16:23.907 "data_size": 65536 00:16:23.907 }, 00:16:23.907 { 00:16:23.907 "name": "BaseBdev3", 00:16:23.907 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:23.907 "is_configured": true, 00:16:23.907 "data_offset": 0, 00:16:23.907 "data_size": 65536 00:16:23.907 }, 00:16:23.907 { 00:16:23.907 "name": "BaseBdev4", 00:16:23.907 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:23.907 "is_configured": true, 00:16:23.907 "data_offset": 0, 00:16:23.907 "data_size": 65536 00:16:23.907 } 00:16:23.907 ] 00:16:23.907 }' 00:16:23.907 13:32:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.908 13:32:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.167 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.168 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.168 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.168 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.168 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.428 13:32:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.428 [2024-11-18 13:32:54.231262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.428 13:32:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.428 "name": "Existed_Raid", 00:16:24.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.428 "strip_size_kb": 64, 00:16:24.428 "state": "configuring", 00:16:24.428 "raid_level": "raid5f", 00:16:24.428 "superblock": false, 00:16:24.428 "num_base_bdevs": 4, 00:16:24.428 "num_base_bdevs_discovered": 2, 00:16:24.428 "num_base_bdevs_operational": 4, 00:16:24.428 "base_bdevs_list": [ 00:16:24.428 { 00:16:24.428 "name": "BaseBdev1", 00:16:24.428 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:24.428 "is_configured": true, 00:16:24.428 "data_offset": 0, 00:16:24.428 "data_size": 65536 00:16:24.428 }, 00:16:24.428 { 00:16:24.428 "name": null, 00:16:24.428 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:24.428 "is_configured": false, 00:16:24.428 "data_offset": 0, 00:16:24.428 "data_size": 65536 00:16:24.428 }, 00:16:24.428 { 00:16:24.428 "name": null, 00:16:24.428 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:24.428 "is_configured": false, 00:16:24.428 "data_offset": 0, 00:16:24.428 "data_size": 65536 00:16:24.428 }, 00:16:24.428 { 00:16:24.428 "name": "BaseBdev4", 00:16:24.428 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:24.428 "is_configured": true, 00:16:24.428 "data_offset": 0, 00:16:24.428 "data_size": 65536 00:16:24.428 } 00:16:24.428 ] 00:16:24.428 }' 00:16:24.428 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.428 13:32:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.688 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.688 [2024-11-18 13:32:54.738472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.946 
13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.946 "name": "Existed_Raid", 00:16:24.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.946 "strip_size_kb": 64, 00:16:24.946 "state": "configuring", 00:16:24.946 "raid_level": "raid5f", 00:16:24.946 "superblock": false, 00:16:24.946 "num_base_bdevs": 4, 00:16:24.946 "num_base_bdevs_discovered": 3, 00:16:24.946 "num_base_bdevs_operational": 4, 00:16:24.946 "base_bdevs_list": [ 00:16:24.946 { 00:16:24.946 "name": "BaseBdev1", 00:16:24.946 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:24.946 "is_configured": true, 00:16:24.946 "data_offset": 0, 00:16:24.946 "data_size": 65536 00:16:24.946 }, 00:16:24.946 { 00:16:24.946 "name": null, 00:16:24.946 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:24.946 "is_configured": 
false, 00:16:24.946 "data_offset": 0, 00:16:24.946 "data_size": 65536 00:16:24.946 }, 00:16:24.946 { 00:16:24.946 "name": "BaseBdev3", 00:16:24.946 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:24.946 "is_configured": true, 00:16:24.946 "data_offset": 0, 00:16:24.946 "data_size": 65536 00:16:24.946 }, 00:16:24.946 { 00:16:24.946 "name": "BaseBdev4", 00:16:24.946 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:24.946 "is_configured": true, 00:16:24.946 "data_offset": 0, 00:16:24.946 "data_size": 65536 00:16:24.946 } 00:16:24.946 ] 00:16:24.946 }' 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.946 13:32:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.206 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.206 [2024-11-18 13:32:55.233624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.466 "name": "Existed_Raid", 00:16:25.466 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:25.466 "strip_size_kb": 64, 00:16:25.466 "state": "configuring", 00:16:25.466 "raid_level": "raid5f", 00:16:25.466 "superblock": false, 00:16:25.466 "num_base_bdevs": 4, 00:16:25.466 "num_base_bdevs_discovered": 2, 00:16:25.466 "num_base_bdevs_operational": 4, 00:16:25.466 "base_bdevs_list": [ 00:16:25.466 { 00:16:25.466 "name": null, 00:16:25.466 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:25.466 "is_configured": false, 00:16:25.466 "data_offset": 0, 00:16:25.466 "data_size": 65536 00:16:25.466 }, 00:16:25.466 { 00:16:25.466 "name": null, 00:16:25.466 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:25.466 "is_configured": false, 00:16:25.466 "data_offset": 0, 00:16:25.466 "data_size": 65536 00:16:25.466 }, 00:16:25.466 { 00:16:25.466 "name": "BaseBdev3", 00:16:25.466 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:25.466 "is_configured": true, 00:16:25.466 "data_offset": 0, 00:16:25.466 "data_size": 65536 00:16:25.466 }, 00:16:25.466 { 00:16:25.466 "name": "BaseBdev4", 00:16:25.466 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:25.466 "is_configured": true, 00:16:25.466 "data_offset": 0, 00:16:25.466 "data_size": 65536 00:16:25.466 } 00:16:25.466 ] 00:16:25.466 }' 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.466 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.036 [2024-11-18 13:32:55.854450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.036 "name": "Existed_Raid", 00:16:26.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.036 "strip_size_kb": 64, 00:16:26.036 "state": "configuring", 00:16:26.036 "raid_level": "raid5f", 00:16:26.036 "superblock": false, 00:16:26.036 "num_base_bdevs": 4, 00:16:26.036 "num_base_bdevs_discovered": 3, 00:16:26.036 "num_base_bdevs_operational": 4, 00:16:26.036 "base_bdevs_list": [ 00:16:26.036 { 00:16:26.036 "name": null, 00:16:26.036 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:26.036 "is_configured": false, 00:16:26.036 "data_offset": 0, 00:16:26.036 "data_size": 65536 00:16:26.036 }, 00:16:26.036 { 00:16:26.036 "name": "BaseBdev2", 00:16:26.036 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:26.036 "is_configured": true, 00:16:26.036 "data_offset": 0, 00:16:26.036 "data_size": 65536 00:16:26.036 }, 00:16:26.036 { 00:16:26.036 "name": "BaseBdev3", 00:16:26.036 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:26.036 "is_configured": true, 00:16:26.036 "data_offset": 0, 00:16:26.036 "data_size": 65536 00:16:26.036 }, 00:16:26.036 { 00:16:26.036 "name": "BaseBdev4", 00:16:26.036 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:26.036 "is_configured": true, 00:16:26.036 "data_offset": 0, 00:16:26.036 "data_size": 65536 00:16:26.036 } 00:16:26.036 ] 00:16:26.036 }' 00:16:26.036 13:32:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.036 13:32:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.296 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.296 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.296 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.296 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.296 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dc350fe9-d46e-4c0c-bd6f-7a221991a893 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.557 [2024-11-18 13:32:56.445395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:26.557 [2024-11-18 
13:32:56.445491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:26.557 [2024-11-18 13:32:56.445517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:26.557 [2024-11-18 13:32:56.445796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:26.557 [2024-11-18 13:32:56.452860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:26.557 [2024-11-18 13:32:56.452915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:26.557 [2024-11-18 13:32:56.453203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.557 NewBaseBdev 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.557 [ 00:16:26.557 { 00:16:26.557 "name": "NewBaseBdev", 00:16:26.557 "aliases": [ 00:16:26.557 "dc350fe9-d46e-4c0c-bd6f-7a221991a893" 00:16:26.557 ], 00:16:26.557 "product_name": "Malloc disk", 00:16:26.557 "block_size": 512, 00:16:26.557 "num_blocks": 65536, 00:16:26.557 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:26.557 "assigned_rate_limits": { 00:16:26.557 "rw_ios_per_sec": 0, 00:16:26.557 "rw_mbytes_per_sec": 0, 00:16:26.557 "r_mbytes_per_sec": 0, 00:16:26.557 "w_mbytes_per_sec": 0 00:16:26.557 }, 00:16:26.557 "claimed": true, 00:16:26.557 "claim_type": "exclusive_write", 00:16:26.557 "zoned": false, 00:16:26.557 "supported_io_types": { 00:16:26.557 "read": true, 00:16:26.557 "write": true, 00:16:26.557 "unmap": true, 00:16:26.557 "flush": true, 00:16:26.557 "reset": true, 00:16:26.557 "nvme_admin": false, 00:16:26.557 "nvme_io": false, 00:16:26.557 "nvme_io_md": false, 00:16:26.557 "write_zeroes": true, 00:16:26.557 "zcopy": true, 00:16:26.557 "get_zone_info": false, 00:16:26.557 "zone_management": false, 00:16:26.557 "zone_append": false, 00:16:26.557 "compare": false, 00:16:26.557 "compare_and_write": false, 00:16:26.557 "abort": true, 00:16:26.557 "seek_hole": false, 00:16:26.557 "seek_data": false, 00:16:26.557 "copy": true, 00:16:26.557 "nvme_iov_md": false 00:16:26.557 }, 00:16:26.557 "memory_domains": [ 00:16:26.557 { 00:16:26.557 "dma_device_id": "system", 00:16:26.557 "dma_device_type": 1 00:16:26.557 }, 00:16:26.557 { 00:16:26.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.557 "dma_device_type": 2 00:16:26.557 } 
00:16:26.557 ], 00:16:26.557 "driver_specific": {} 00:16:26.557 } 00:16:26.557 ] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.557 "name": "Existed_Raid", 00:16:26.557 "uuid": "1513bd07-9beb-4ba1-adda-be0685bf3f68", 00:16:26.557 "strip_size_kb": 64, 00:16:26.557 "state": "online", 00:16:26.557 "raid_level": "raid5f", 00:16:26.557 "superblock": false, 00:16:26.557 "num_base_bdevs": 4, 00:16:26.557 "num_base_bdevs_discovered": 4, 00:16:26.557 "num_base_bdevs_operational": 4, 00:16:26.557 "base_bdevs_list": [ 00:16:26.557 { 00:16:26.557 "name": "NewBaseBdev", 00:16:26.557 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:26.557 "is_configured": true, 00:16:26.557 "data_offset": 0, 00:16:26.557 "data_size": 65536 00:16:26.557 }, 00:16:26.557 { 00:16:26.557 "name": "BaseBdev2", 00:16:26.557 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:26.557 "is_configured": true, 00:16:26.557 "data_offset": 0, 00:16:26.557 "data_size": 65536 00:16:26.557 }, 00:16:26.557 { 00:16:26.557 "name": "BaseBdev3", 00:16:26.557 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:26.557 "is_configured": true, 00:16:26.557 "data_offset": 0, 00:16:26.557 "data_size": 65536 00:16:26.557 }, 00:16:26.557 { 00:16:26.557 "name": "BaseBdev4", 00:16:26.557 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:26.557 "is_configured": true, 00:16:26.557 "data_offset": 0, 00:16:26.557 "data_size": 65536 00:16:26.557 } 00:16:26.557 ] 00:16:26.557 }' 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.557 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.127 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.127 [2024-11-18 13:32:56.944672] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.128 13:32:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.128 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.128 "name": "Existed_Raid", 00:16:27.128 "aliases": [ 00:16:27.128 "1513bd07-9beb-4ba1-adda-be0685bf3f68" 00:16:27.128 ], 00:16:27.128 "product_name": "Raid Volume", 00:16:27.128 "block_size": 512, 00:16:27.128 "num_blocks": 196608, 00:16:27.128 "uuid": "1513bd07-9beb-4ba1-adda-be0685bf3f68", 00:16:27.128 "assigned_rate_limits": { 00:16:27.128 "rw_ios_per_sec": 0, 00:16:27.128 "rw_mbytes_per_sec": 0, 00:16:27.128 "r_mbytes_per_sec": 0, 00:16:27.128 "w_mbytes_per_sec": 0 00:16:27.128 }, 00:16:27.128 "claimed": false, 00:16:27.128 "zoned": false, 00:16:27.128 "supported_io_types": { 00:16:27.128 "read": true, 00:16:27.128 "write": true, 00:16:27.128 "unmap": false, 00:16:27.128 "flush": false, 00:16:27.128 "reset": true, 00:16:27.128 "nvme_admin": false, 00:16:27.128 "nvme_io": false, 00:16:27.128 "nvme_io_md": 
false, 00:16:27.128 "write_zeroes": true, 00:16:27.128 "zcopy": false, 00:16:27.128 "get_zone_info": false, 00:16:27.128 "zone_management": false, 00:16:27.128 "zone_append": false, 00:16:27.128 "compare": false, 00:16:27.128 "compare_and_write": false, 00:16:27.128 "abort": false, 00:16:27.128 "seek_hole": false, 00:16:27.128 "seek_data": false, 00:16:27.128 "copy": false, 00:16:27.128 "nvme_iov_md": false 00:16:27.128 }, 00:16:27.128 "driver_specific": { 00:16:27.128 "raid": { 00:16:27.128 "uuid": "1513bd07-9beb-4ba1-adda-be0685bf3f68", 00:16:27.128 "strip_size_kb": 64, 00:16:27.128 "state": "online", 00:16:27.128 "raid_level": "raid5f", 00:16:27.128 "superblock": false, 00:16:27.128 "num_base_bdevs": 4, 00:16:27.128 "num_base_bdevs_discovered": 4, 00:16:27.128 "num_base_bdevs_operational": 4, 00:16:27.128 "base_bdevs_list": [ 00:16:27.128 { 00:16:27.128 "name": "NewBaseBdev", 00:16:27.128 "uuid": "dc350fe9-d46e-4c0c-bd6f-7a221991a893", 00:16:27.128 "is_configured": true, 00:16:27.128 "data_offset": 0, 00:16:27.128 "data_size": 65536 00:16:27.128 }, 00:16:27.128 { 00:16:27.128 "name": "BaseBdev2", 00:16:27.128 "uuid": "3d22b223-1cf1-4db8-ac73-26d794974f6b", 00:16:27.128 "is_configured": true, 00:16:27.128 "data_offset": 0, 00:16:27.128 "data_size": 65536 00:16:27.128 }, 00:16:27.128 { 00:16:27.128 "name": "BaseBdev3", 00:16:27.128 "uuid": "8dce67e3-4c28-4b08-a304-90708084d96a", 00:16:27.128 "is_configured": true, 00:16:27.128 "data_offset": 0, 00:16:27.128 "data_size": 65536 00:16:27.128 }, 00:16:27.128 { 00:16:27.128 "name": "BaseBdev4", 00:16:27.128 "uuid": "a07990da-48c1-40bc-a468-cdb7937a429a", 00:16:27.128 "is_configured": true, 00:16:27.128 "data_offset": 0, 00:16:27.128 "data_size": 65536 00:16:27.128 } 00:16:27.128 ] 00:16:27.128 } 00:16:27.128 } 00:16:27.128 }' 00:16:27.128 13:32:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.128 13:32:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:27.128 BaseBdev2 00:16:27.128 BaseBdev3 00:16:27.128 BaseBdev4' 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.128 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.388 13:32:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:27.388 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.389 [2024-11-18 13:32:57.251953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.389 [2024-11-18 13:32:57.251978] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.389 [2024-11-18 13:32:57.252043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.389 [2024-11-18 13:32:57.252342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.389 [2024-11-18 13:32:57.252354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82714 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82714 ']' 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82714 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82714 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82714' 00:16:27.389 killing process with pid 82714 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82714 00:16:27.389 [2024-11-18 13:32:57.301475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.389 13:32:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82714 00:16:27.648 [2024-11-18 13:32:57.670575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:29.031 00:16:29.031 real 0m11.368s 00:16:29.031 user 0m18.007s 00:16:29.031 sys 0m2.210s 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.031 ************************************ 00:16:29.031 END TEST raid5f_state_function_test 00:16:29.031 ************************************ 00:16:29.031 13:32:58 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:29.031 13:32:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:29.031 13:32:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.031 13:32:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.031 ************************************ 00:16:29.031 START TEST 
raid5f_state_function_test_sb 00:16:29.031 ************************************ 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:29.031 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:29.032 
13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83385 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83385' 00:16:29.032 Process raid pid: 83385 00:16:29.032 13:32:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83385 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83385 ']' 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.032 13:32:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.032 [2024-11-18 13:32:58.901729] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:29.032 [2024-11-18 13:32:58.901891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.032 [2024-11-18 13:32:59.077251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.292 [2024-11-18 13:32:59.183732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.552 [2024-11-18 13:32:59.377668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.552 [2024-11-18 13:32:59.377701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.813 [2024-11-18 13:32:59.724273] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.813 [2024-11-18 13:32:59.724320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.813 [2024-11-18 13:32:59.724335] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.813 [2024-11-18 13:32:59.724361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.813 [2024-11-18 13:32:59.724367] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:29.813 [2024-11-18 13:32:59.724375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.813 [2024-11-18 13:32:59.724382] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:29.813 [2024-11-18 13:32:59.724390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.813 "name": "Existed_Raid", 00:16:29.813 "uuid": "9a79cb73-571d-4e7d-bdef-4c51dfab943b", 00:16:29.813 "strip_size_kb": 64, 00:16:29.813 "state": "configuring", 00:16:29.813 "raid_level": "raid5f", 00:16:29.813 "superblock": true, 00:16:29.813 "num_base_bdevs": 4, 00:16:29.813 "num_base_bdevs_discovered": 0, 00:16:29.813 "num_base_bdevs_operational": 4, 00:16:29.813 "base_bdevs_list": [ 00:16:29.813 { 00:16:29.813 "name": "BaseBdev1", 00:16:29.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.813 "is_configured": false, 00:16:29.813 "data_offset": 0, 00:16:29.813 "data_size": 0 00:16:29.813 }, 00:16:29.813 { 00:16:29.813 "name": "BaseBdev2", 00:16:29.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.813 "is_configured": false, 00:16:29.813 "data_offset": 0, 00:16:29.813 "data_size": 0 00:16:29.813 }, 00:16:29.813 { 00:16:29.813 "name": "BaseBdev3", 00:16:29.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.813 "is_configured": false, 00:16:29.813 "data_offset": 0, 00:16:29.813 "data_size": 0 00:16:29.813 }, 00:16:29.813 { 00:16:29.813 "name": "BaseBdev4", 00:16:29.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.813 "is_configured": false, 00:16:29.813 "data_offset": 0, 00:16:29.813 "data_size": 0 00:16:29.813 } 00:16:29.813 ] 00:16:29.813 }' 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.813 13:32:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.384 [2024-11-18 13:33:00.155417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.384 [2024-11-18 13:33:00.155504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.384 [2024-11-18 13:33:00.167412] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.384 [2024-11-18 13:33:00.167490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.384 [2024-11-18 13:33:00.167516] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.384 [2024-11-18 13:33:00.167539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.384 [2024-11-18 13:33:00.167556] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.384 [2024-11-18 13:33:00.167577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.384 [2024-11-18 13:33:00.167594] 
bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:30.384 [2024-11-18 13:33:00.167614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.384 [2024-11-18 13:33:00.215236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.384 BaseBdev1 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.384 [ 00:16:30.384 { 00:16:30.384 "name": "BaseBdev1", 00:16:30.384 "aliases": [ 00:16:30.384 "779164a8-2b27-4a0a-85b6-78d94b0a01e0" 00:16:30.384 ], 00:16:30.384 "product_name": "Malloc disk", 00:16:30.384 "block_size": 512, 00:16:30.384 "num_blocks": 65536, 00:16:30.384 "uuid": "779164a8-2b27-4a0a-85b6-78d94b0a01e0", 00:16:30.384 "assigned_rate_limits": { 00:16:30.384 "rw_ios_per_sec": 0, 00:16:30.384 "rw_mbytes_per_sec": 0, 00:16:30.384 "r_mbytes_per_sec": 0, 00:16:30.384 "w_mbytes_per_sec": 0 00:16:30.384 }, 00:16:30.384 "claimed": true, 00:16:30.384 "claim_type": "exclusive_write", 00:16:30.384 "zoned": false, 00:16:30.384 "supported_io_types": { 00:16:30.384 "read": true, 00:16:30.384 "write": true, 00:16:30.384 "unmap": true, 00:16:30.384 "flush": true, 00:16:30.384 "reset": true, 00:16:30.384 "nvme_admin": false, 00:16:30.384 "nvme_io": false, 00:16:30.384 "nvme_io_md": false, 00:16:30.384 "write_zeroes": true, 00:16:30.384 "zcopy": true, 00:16:30.384 "get_zone_info": false, 00:16:30.384 "zone_management": false, 00:16:30.384 "zone_append": false, 00:16:30.384 "compare": false, 00:16:30.384 "compare_and_write": false, 00:16:30.384 "abort": true, 00:16:30.384 "seek_hole": false, 00:16:30.384 "seek_data": false, 00:16:30.384 "copy": true, 00:16:30.384 "nvme_iov_md": false 00:16:30.384 }, 00:16:30.384 "memory_domains": [ 00:16:30.384 { 00:16:30.384 "dma_device_id": "system", 00:16:30.384 "dma_device_type": 1 00:16:30.384 }, 00:16:30.384 { 00:16:30.384 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:30.384 "dma_device_type": 2 00:16:30.384 } 00:16:30.384 ], 00:16:30.384 "driver_specific": {} 00:16:30.384 } 00:16:30.384 ] 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.384 13:33:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.384 "name": "Existed_Raid", 00:16:30.384 "uuid": "03c9a697-18a4-42ef-bcad-c5734a00a110", 00:16:30.384 "strip_size_kb": 64, 00:16:30.384 "state": "configuring", 00:16:30.384 "raid_level": "raid5f", 00:16:30.384 "superblock": true, 00:16:30.384 "num_base_bdevs": 4, 00:16:30.384 "num_base_bdevs_discovered": 1, 00:16:30.384 "num_base_bdevs_operational": 4, 00:16:30.384 "base_bdevs_list": [ 00:16:30.384 { 00:16:30.384 "name": "BaseBdev1", 00:16:30.384 "uuid": "779164a8-2b27-4a0a-85b6-78d94b0a01e0", 00:16:30.384 "is_configured": true, 00:16:30.384 "data_offset": 2048, 00:16:30.384 "data_size": 63488 00:16:30.384 }, 00:16:30.384 { 00:16:30.384 "name": "BaseBdev2", 00:16:30.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.384 "is_configured": false, 00:16:30.384 "data_offset": 0, 00:16:30.384 "data_size": 0 00:16:30.384 }, 00:16:30.384 { 00:16:30.384 "name": "BaseBdev3", 00:16:30.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.384 "is_configured": false, 00:16:30.384 "data_offset": 0, 00:16:30.384 "data_size": 0 00:16:30.384 }, 00:16:30.384 { 00:16:30.384 "name": "BaseBdev4", 00:16:30.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.384 "is_configured": false, 00:16:30.384 "data_offset": 0, 00:16:30.384 "data_size": 0 00:16:30.384 } 00:16:30.384 ] 00:16:30.384 }' 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.384 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.953 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.953 13:33:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.953 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.954 [2024-11-18 13:33:00.730342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.954 [2024-11-18 13:33:00.730392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.954 [2024-11-18 13:33:00.738392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.954 [2024-11-18 13:33:00.740152] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.954 [2024-11-18 13:33:00.740234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.954 [2024-11-18 13:33:00.740278] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.954 [2024-11-18 13:33:00.740302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.954 [2024-11-18 13:33:00.740320] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:30.954 [2024-11-18 13:33:00.740340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.954 13:33:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.954 "name": "Existed_Raid", 00:16:30.954 "uuid": "0f96d1ae-048e-4ef9-8f9f-19a226e9119c", 00:16:30.954 "strip_size_kb": 64, 00:16:30.954 "state": "configuring", 00:16:30.954 "raid_level": "raid5f", 00:16:30.954 "superblock": true, 00:16:30.954 "num_base_bdevs": 4, 00:16:30.954 "num_base_bdevs_discovered": 1, 00:16:30.954 "num_base_bdevs_operational": 4, 00:16:30.954 "base_bdevs_list": [ 00:16:30.954 { 00:16:30.954 "name": "BaseBdev1", 00:16:30.954 "uuid": "779164a8-2b27-4a0a-85b6-78d94b0a01e0", 00:16:30.954 "is_configured": true, 00:16:30.954 "data_offset": 2048, 00:16:30.954 "data_size": 63488 00:16:30.954 }, 00:16:30.954 { 00:16:30.954 "name": "BaseBdev2", 00:16:30.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.954 "is_configured": false, 00:16:30.954 "data_offset": 0, 00:16:30.954 "data_size": 0 00:16:30.954 }, 00:16:30.954 { 00:16:30.954 "name": "BaseBdev3", 00:16:30.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.954 "is_configured": false, 00:16:30.954 "data_offset": 0, 00:16:30.954 "data_size": 0 00:16:30.954 }, 00:16:30.954 { 00:16:30.954 "name": "BaseBdev4", 00:16:30.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.954 "is_configured": false, 00:16:30.954 "data_offset": 0, 00:16:30.954 "data_size": 0 00:16:30.954 } 00:16:30.954 ] 00:16:30.954 }' 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.954 13:33:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.213 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.213 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:31.213 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.213 [2024-11-18 13:33:01.207538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.213 BaseBdev2 00:16:31.213 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.213 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:31.213 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.214 [ 00:16:31.214 { 00:16:31.214 "name": "BaseBdev2", 00:16:31.214 "aliases": [ 00:16:31.214 
"2ec1a4d5-2f42-49b7-bfba-3f729345cab7" 00:16:31.214 ], 00:16:31.214 "product_name": "Malloc disk", 00:16:31.214 "block_size": 512, 00:16:31.214 "num_blocks": 65536, 00:16:31.214 "uuid": "2ec1a4d5-2f42-49b7-bfba-3f729345cab7", 00:16:31.214 "assigned_rate_limits": { 00:16:31.214 "rw_ios_per_sec": 0, 00:16:31.214 "rw_mbytes_per_sec": 0, 00:16:31.214 "r_mbytes_per_sec": 0, 00:16:31.214 "w_mbytes_per_sec": 0 00:16:31.214 }, 00:16:31.214 "claimed": true, 00:16:31.214 "claim_type": "exclusive_write", 00:16:31.214 "zoned": false, 00:16:31.214 "supported_io_types": { 00:16:31.214 "read": true, 00:16:31.214 "write": true, 00:16:31.214 "unmap": true, 00:16:31.214 "flush": true, 00:16:31.214 "reset": true, 00:16:31.214 "nvme_admin": false, 00:16:31.214 "nvme_io": false, 00:16:31.214 "nvme_io_md": false, 00:16:31.214 "write_zeroes": true, 00:16:31.214 "zcopy": true, 00:16:31.214 "get_zone_info": false, 00:16:31.214 "zone_management": false, 00:16:31.214 "zone_append": false, 00:16:31.214 "compare": false, 00:16:31.214 "compare_and_write": false, 00:16:31.214 "abort": true, 00:16:31.214 "seek_hole": false, 00:16:31.214 "seek_data": false, 00:16:31.214 "copy": true, 00:16:31.214 "nvme_iov_md": false 00:16:31.214 }, 00:16:31.214 "memory_domains": [ 00:16:31.214 { 00:16:31.214 "dma_device_id": "system", 00:16:31.214 "dma_device_type": 1 00:16:31.214 }, 00:16:31.214 { 00:16:31.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.214 "dma_device_type": 2 00:16:31.214 } 00:16:31.214 ], 00:16:31.214 "driver_specific": {} 00:16:31.214 } 00:16:31.214 ] 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.214 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.474 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.474 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.474 "name": "Existed_Raid", 00:16:31.474 "uuid": 
"0f96d1ae-048e-4ef9-8f9f-19a226e9119c", 00:16:31.474 "strip_size_kb": 64, 00:16:31.474 "state": "configuring", 00:16:31.474 "raid_level": "raid5f", 00:16:31.474 "superblock": true, 00:16:31.474 "num_base_bdevs": 4, 00:16:31.474 "num_base_bdevs_discovered": 2, 00:16:31.474 "num_base_bdevs_operational": 4, 00:16:31.474 "base_bdevs_list": [ 00:16:31.474 { 00:16:31.474 "name": "BaseBdev1", 00:16:31.474 "uuid": "779164a8-2b27-4a0a-85b6-78d94b0a01e0", 00:16:31.474 "is_configured": true, 00:16:31.474 "data_offset": 2048, 00:16:31.474 "data_size": 63488 00:16:31.474 }, 00:16:31.474 { 00:16:31.474 "name": "BaseBdev2", 00:16:31.474 "uuid": "2ec1a4d5-2f42-49b7-bfba-3f729345cab7", 00:16:31.474 "is_configured": true, 00:16:31.474 "data_offset": 2048, 00:16:31.474 "data_size": 63488 00:16:31.474 }, 00:16:31.474 { 00:16:31.474 "name": "BaseBdev3", 00:16:31.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.474 "is_configured": false, 00:16:31.474 "data_offset": 0, 00:16:31.474 "data_size": 0 00:16:31.474 }, 00:16:31.474 { 00:16:31.474 "name": "BaseBdev4", 00:16:31.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.474 "is_configured": false, 00:16:31.474 "data_offset": 0, 00:16:31.474 "data_size": 0 00:16:31.474 } 00:16:31.474 ] 00:16:31.474 }' 00:16:31.474 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.474 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.735 [2024-11-18 13:33:01.740959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.735 BaseBdev3 
00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.735 [ 00:16:31.735 { 00:16:31.735 "name": "BaseBdev3", 00:16:31.735 "aliases": [ 00:16:31.735 "f5763ec9-c082-4ca6-95a7-e6b0d21e4c0c" 00:16:31.735 ], 00:16:31.735 "product_name": "Malloc disk", 00:16:31.735 "block_size": 512, 00:16:31.735 "num_blocks": 65536, 00:16:31.735 "uuid": "f5763ec9-c082-4ca6-95a7-e6b0d21e4c0c", 00:16:31.735 
"assigned_rate_limits": { 00:16:31.735 "rw_ios_per_sec": 0, 00:16:31.735 "rw_mbytes_per_sec": 0, 00:16:31.735 "r_mbytes_per_sec": 0, 00:16:31.735 "w_mbytes_per_sec": 0 00:16:31.735 }, 00:16:31.735 "claimed": true, 00:16:31.735 "claim_type": "exclusive_write", 00:16:31.735 "zoned": false, 00:16:31.735 "supported_io_types": { 00:16:31.735 "read": true, 00:16:31.735 "write": true, 00:16:31.735 "unmap": true, 00:16:31.735 "flush": true, 00:16:31.735 "reset": true, 00:16:31.735 "nvme_admin": false, 00:16:31.735 "nvme_io": false, 00:16:31.735 "nvme_io_md": false, 00:16:31.735 "write_zeroes": true, 00:16:31.735 "zcopy": true, 00:16:31.735 "get_zone_info": false, 00:16:31.735 "zone_management": false, 00:16:31.735 "zone_append": false, 00:16:31.735 "compare": false, 00:16:31.735 "compare_and_write": false, 00:16:31.735 "abort": true, 00:16:31.735 "seek_hole": false, 00:16:31.735 "seek_data": false, 00:16:31.735 "copy": true, 00:16:31.735 "nvme_iov_md": false 00:16:31.735 }, 00:16:31.735 "memory_domains": [ 00:16:31.735 { 00:16:31.735 "dma_device_id": "system", 00:16:31.735 "dma_device_type": 1 00:16:31.735 }, 00:16:31.735 { 00:16:31.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.735 "dma_device_type": 2 00:16:31.735 } 00:16:31.735 ], 00:16:31.735 "driver_specific": {} 00:16:31.735 } 00:16:31.735 ] 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.735 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.995 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.995 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.995 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.995 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.995 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.995 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.995 "name": "Existed_Raid", 00:16:31.995 "uuid": "0f96d1ae-048e-4ef9-8f9f-19a226e9119c", 00:16:31.995 "strip_size_kb": 64, 00:16:31.995 "state": "configuring", 00:16:31.995 "raid_level": "raid5f", 00:16:31.995 "superblock": true, 00:16:31.995 "num_base_bdevs": 4, 00:16:31.995 "num_base_bdevs_discovered": 3, 
00:16:31.995 "num_base_bdevs_operational": 4, 00:16:31.995 "base_bdevs_list": [ 00:16:31.995 { 00:16:31.995 "name": "BaseBdev1", 00:16:31.995 "uuid": "779164a8-2b27-4a0a-85b6-78d94b0a01e0", 00:16:31.995 "is_configured": true, 00:16:31.995 "data_offset": 2048, 00:16:31.995 "data_size": 63488 00:16:31.995 }, 00:16:31.995 { 00:16:31.995 "name": "BaseBdev2", 00:16:31.995 "uuid": "2ec1a4d5-2f42-49b7-bfba-3f729345cab7", 00:16:31.995 "is_configured": true, 00:16:31.995 "data_offset": 2048, 00:16:31.995 "data_size": 63488 00:16:31.995 }, 00:16:31.995 { 00:16:31.995 "name": "BaseBdev3", 00:16:31.995 "uuid": "f5763ec9-c082-4ca6-95a7-e6b0d21e4c0c", 00:16:31.995 "is_configured": true, 00:16:31.995 "data_offset": 2048, 00:16:31.995 "data_size": 63488 00:16:31.995 }, 00:16:31.995 { 00:16:31.995 "name": "BaseBdev4", 00:16:31.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.995 "is_configured": false, 00:16:31.995 "data_offset": 0, 00:16:31.995 "data_size": 0 00:16:31.995 } 00:16:31.995 ] 00:16:31.995 }' 00:16:31.995 13:33:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.995 13:33:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.256 [2024-11-18 13:33:02.247513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.256 [2024-11-18 13:33:02.247854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:32.256 [2024-11-18 13:33:02.247906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:32.256 [2024-11-18 
13:33:02.248188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:32.256 BaseBdev4 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.256 [2024-11-18 13:33:02.255135] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:32.256 [2024-11-18 13:33:02.255200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:32.256 [2024-11-18 13:33:02.255401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:32.256 13:33:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.256 [ 00:16:32.256 { 00:16:32.256 "name": "BaseBdev4", 00:16:32.256 "aliases": [ 00:16:32.256 "b94788ad-8ee9-478d-9756-422caed27cbb" 00:16:32.256 ], 00:16:32.256 "product_name": "Malloc disk", 00:16:32.256 "block_size": 512, 00:16:32.256 "num_blocks": 65536, 00:16:32.256 "uuid": "b94788ad-8ee9-478d-9756-422caed27cbb", 00:16:32.256 "assigned_rate_limits": { 00:16:32.256 "rw_ios_per_sec": 0, 00:16:32.256 "rw_mbytes_per_sec": 0, 00:16:32.256 "r_mbytes_per_sec": 0, 00:16:32.256 "w_mbytes_per_sec": 0 00:16:32.256 }, 00:16:32.256 "claimed": true, 00:16:32.256 "claim_type": "exclusive_write", 00:16:32.256 "zoned": false, 00:16:32.256 "supported_io_types": { 00:16:32.256 "read": true, 00:16:32.256 "write": true, 00:16:32.256 "unmap": true, 00:16:32.256 "flush": true, 00:16:32.256 "reset": true, 00:16:32.256 "nvme_admin": false, 00:16:32.256 "nvme_io": false, 00:16:32.256 "nvme_io_md": false, 00:16:32.256 "write_zeroes": true, 00:16:32.256 "zcopy": true, 00:16:32.256 "get_zone_info": false, 00:16:32.256 "zone_management": false, 00:16:32.256 "zone_append": false, 00:16:32.256 "compare": false, 00:16:32.256 "compare_and_write": false, 00:16:32.256 "abort": true, 00:16:32.256 "seek_hole": false, 00:16:32.256 "seek_data": false, 00:16:32.256 "copy": true, 00:16:32.256 "nvme_iov_md": false 00:16:32.256 }, 00:16:32.256 "memory_domains": [ 00:16:32.256 { 00:16:32.256 "dma_device_id": "system", 00:16:32.256 "dma_device_type": 1 00:16:32.256 }, 00:16:32.256 { 00:16:32.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.256 "dma_device_type": 2 00:16:32.256 } 00:16:32.256 ], 00:16:32.256 "driver_specific": {} 00:16:32.256 } 00:16:32.256 ] 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.256 13:33:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:32.256 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.257 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:32.517 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.517 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.517 "name": "Existed_Raid", 00:16:32.517 "uuid": "0f96d1ae-048e-4ef9-8f9f-19a226e9119c", 00:16:32.517 "strip_size_kb": 64, 00:16:32.517 "state": "online", 00:16:32.517 "raid_level": "raid5f", 00:16:32.517 "superblock": true, 00:16:32.517 "num_base_bdevs": 4, 00:16:32.517 "num_base_bdevs_discovered": 4, 00:16:32.517 "num_base_bdevs_operational": 4, 00:16:32.517 "base_bdevs_list": [ 00:16:32.517 { 00:16:32.517 "name": "BaseBdev1", 00:16:32.517 "uuid": "779164a8-2b27-4a0a-85b6-78d94b0a01e0", 00:16:32.517 "is_configured": true, 00:16:32.517 "data_offset": 2048, 00:16:32.517 "data_size": 63488 00:16:32.517 }, 00:16:32.517 { 00:16:32.517 "name": "BaseBdev2", 00:16:32.517 "uuid": "2ec1a4d5-2f42-49b7-bfba-3f729345cab7", 00:16:32.517 "is_configured": true, 00:16:32.517 "data_offset": 2048, 00:16:32.517 "data_size": 63488 00:16:32.517 }, 00:16:32.517 { 00:16:32.517 "name": "BaseBdev3", 00:16:32.517 "uuid": "f5763ec9-c082-4ca6-95a7-e6b0d21e4c0c", 00:16:32.517 "is_configured": true, 00:16:32.517 "data_offset": 2048, 00:16:32.517 "data_size": 63488 00:16:32.517 }, 00:16:32.517 { 00:16:32.517 "name": "BaseBdev4", 00:16:32.517 "uuid": "b94788ad-8ee9-478d-9756-422caed27cbb", 00:16:32.517 "is_configured": true, 00:16:32.517 "data_offset": 2048, 00:16:32.517 "data_size": 63488 00:16:32.517 } 00:16:32.517 ] 00:16:32.517 }' 00:16:32.517 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.517 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.777 [2024-11-18 13:33:02.762775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.777 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.777 "name": "Existed_Raid", 00:16:32.777 "aliases": [ 00:16:32.777 "0f96d1ae-048e-4ef9-8f9f-19a226e9119c" 00:16:32.777 ], 00:16:32.777 "product_name": "Raid Volume", 00:16:32.777 "block_size": 512, 00:16:32.777 "num_blocks": 190464, 00:16:32.777 "uuid": "0f96d1ae-048e-4ef9-8f9f-19a226e9119c", 00:16:32.777 "assigned_rate_limits": { 00:16:32.777 "rw_ios_per_sec": 0, 00:16:32.777 "rw_mbytes_per_sec": 0, 00:16:32.777 "r_mbytes_per_sec": 0, 00:16:32.777 "w_mbytes_per_sec": 0 00:16:32.777 }, 00:16:32.777 "claimed": false, 00:16:32.777 "zoned": false, 00:16:32.777 "supported_io_types": { 00:16:32.777 "read": true, 00:16:32.777 "write": true, 00:16:32.777 "unmap": false, 00:16:32.777 "flush": false, 
00:16:32.777 "reset": true, 00:16:32.777 "nvme_admin": false, 00:16:32.777 "nvme_io": false, 00:16:32.777 "nvme_io_md": false, 00:16:32.777 "write_zeroes": true, 00:16:32.777 "zcopy": false, 00:16:32.777 "get_zone_info": false, 00:16:32.777 "zone_management": false, 00:16:32.777 "zone_append": false, 00:16:32.777 "compare": false, 00:16:32.777 "compare_and_write": false, 00:16:32.777 "abort": false, 00:16:32.777 "seek_hole": false, 00:16:32.777 "seek_data": false, 00:16:32.777 "copy": false, 00:16:32.777 "nvme_iov_md": false 00:16:32.777 }, 00:16:32.777 "driver_specific": { 00:16:32.777 "raid": { 00:16:32.777 "uuid": "0f96d1ae-048e-4ef9-8f9f-19a226e9119c", 00:16:32.777 "strip_size_kb": 64, 00:16:32.777 "state": "online", 00:16:32.777 "raid_level": "raid5f", 00:16:32.777 "superblock": true, 00:16:32.777 "num_base_bdevs": 4, 00:16:32.777 "num_base_bdevs_discovered": 4, 00:16:32.777 "num_base_bdevs_operational": 4, 00:16:32.777 "base_bdevs_list": [ 00:16:32.777 { 00:16:32.777 "name": "BaseBdev1", 00:16:32.777 "uuid": "779164a8-2b27-4a0a-85b6-78d94b0a01e0", 00:16:32.777 "is_configured": true, 00:16:32.777 "data_offset": 2048, 00:16:32.777 "data_size": 63488 00:16:32.777 }, 00:16:32.777 { 00:16:32.778 "name": "BaseBdev2", 00:16:32.778 "uuid": "2ec1a4d5-2f42-49b7-bfba-3f729345cab7", 00:16:32.778 "is_configured": true, 00:16:32.778 "data_offset": 2048, 00:16:32.778 "data_size": 63488 00:16:32.778 }, 00:16:32.778 { 00:16:32.778 "name": "BaseBdev3", 00:16:32.778 "uuid": "f5763ec9-c082-4ca6-95a7-e6b0d21e4c0c", 00:16:32.778 "is_configured": true, 00:16:32.778 "data_offset": 2048, 00:16:32.778 "data_size": 63488 00:16:32.778 }, 00:16:32.778 { 00:16:32.778 "name": "BaseBdev4", 00:16:32.778 "uuid": "b94788ad-8ee9-478d-9756-422caed27cbb", 00:16:32.778 "is_configured": true, 00:16:32.778 "data_offset": 2048, 00:16:32.778 "data_size": 63488 00:16:32.778 } 00:16:32.778 ] 00:16:32.778 } 00:16:32.778 } 00:16:32.778 }' 00:16:32.778 13:33:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.778 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:32.778 BaseBdev2 00:16:32.778 BaseBdev3 00:16:32.778 BaseBdev4' 00:16:32.778 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.051 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.052 13:33:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.052 13:33:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.052 13:33:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.052 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.052 [2024-11-18 13:33:03.078122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.326 "name": "Existed_Raid", 00:16:33.326 "uuid": "0f96d1ae-048e-4ef9-8f9f-19a226e9119c", 00:16:33.326 "strip_size_kb": 64, 00:16:33.326 "state": "online", 00:16:33.326 "raid_level": "raid5f", 00:16:33.326 "superblock": true, 00:16:33.326 "num_base_bdevs": 4, 00:16:33.326 "num_base_bdevs_discovered": 3, 00:16:33.326 "num_base_bdevs_operational": 3, 00:16:33.326 "base_bdevs_list": [ 00:16:33.326 { 00:16:33.326 "name": 
null, 00:16:33.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.326 "is_configured": false, 00:16:33.326 "data_offset": 0, 00:16:33.326 "data_size": 63488 00:16:33.326 }, 00:16:33.326 { 00:16:33.326 "name": "BaseBdev2", 00:16:33.326 "uuid": "2ec1a4d5-2f42-49b7-bfba-3f729345cab7", 00:16:33.326 "is_configured": true, 00:16:33.326 "data_offset": 2048, 00:16:33.326 "data_size": 63488 00:16:33.326 }, 00:16:33.326 { 00:16:33.326 "name": "BaseBdev3", 00:16:33.326 "uuid": "f5763ec9-c082-4ca6-95a7-e6b0d21e4c0c", 00:16:33.326 "is_configured": true, 00:16:33.326 "data_offset": 2048, 00:16:33.326 "data_size": 63488 00:16:33.326 }, 00:16:33.326 { 00:16:33.326 "name": "BaseBdev4", 00:16:33.326 "uuid": "b94788ad-8ee9-478d-9756-422caed27cbb", 00:16:33.326 "is_configured": true, 00:16:33.326 "data_offset": 2048, 00:16:33.326 "data_size": 63488 00:16:33.326 } 00:16:33.326 ] 00:16:33.326 }' 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.326 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.587 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:33.587 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.587 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.587 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.587 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.587 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.587 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.847 [2024-11-18 13:33:03.655061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.847 [2024-11-18 13:33:03.655291] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.847 [2024-11-18 13:33:03.742591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.847 [2024-11-18 13:33:03.798513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.847 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.107 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.107 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:34.107 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.107 13:33:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:34.107 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.108 13:33:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.108 [2024-11-18 
13:33:03.947852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:34.108 [2024-11-18 13:33:03.947959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.108 13:33:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.108 BaseBdev2 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.108 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.108 [ 00:16:34.108 { 00:16:34.108 "name": "BaseBdev2", 00:16:34.108 "aliases": [ 00:16:34.108 "4c223693-316a-4904-9a0a-20a91b970b74" 00:16:34.108 ], 00:16:34.108 "product_name": "Malloc disk", 00:16:34.108 "block_size": 512, 00:16:34.108 
"num_blocks": 65536, 00:16:34.108 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:34.108 "assigned_rate_limits": { 00:16:34.108 "rw_ios_per_sec": 0, 00:16:34.108 "rw_mbytes_per_sec": 0, 00:16:34.108 "r_mbytes_per_sec": 0, 00:16:34.108 "w_mbytes_per_sec": 0 00:16:34.108 }, 00:16:34.108 "claimed": false, 00:16:34.108 "zoned": false, 00:16:34.108 "supported_io_types": { 00:16:34.108 "read": true, 00:16:34.108 "write": true, 00:16:34.108 "unmap": true, 00:16:34.108 "flush": true, 00:16:34.108 "reset": true, 00:16:34.108 "nvme_admin": false, 00:16:34.108 "nvme_io": false, 00:16:34.108 "nvme_io_md": false, 00:16:34.108 "write_zeroes": true, 00:16:34.108 "zcopy": true, 00:16:34.368 "get_zone_info": false, 00:16:34.368 "zone_management": false, 00:16:34.368 "zone_append": false, 00:16:34.368 "compare": false, 00:16:34.368 "compare_and_write": false, 00:16:34.368 "abort": true, 00:16:34.368 "seek_hole": false, 00:16:34.368 "seek_data": false, 00:16:34.368 "copy": true, 00:16:34.368 "nvme_iov_md": false 00:16:34.368 }, 00:16:34.368 "memory_domains": [ 00:16:34.368 { 00:16:34.368 "dma_device_id": "system", 00:16:34.368 "dma_device_type": 1 00:16:34.368 }, 00:16:34.368 { 00:16:34.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.368 "dma_device_type": 2 00:16:34.368 } 00:16:34.368 ], 00:16:34.368 "driver_specific": {} 00:16:34.368 } 00:16:34.368 ] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:34.369 13:33:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 BaseBdev3 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 [ 00:16:34.369 { 00:16:34.369 "name": "BaseBdev3", 00:16:34.369 "aliases": [ 00:16:34.369 
"762d8565-6a66-46c9-8991-2bb5495610e4" 00:16:34.369 ], 00:16:34.369 "product_name": "Malloc disk", 00:16:34.369 "block_size": 512, 00:16:34.369 "num_blocks": 65536, 00:16:34.369 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:34.369 "assigned_rate_limits": { 00:16:34.369 "rw_ios_per_sec": 0, 00:16:34.369 "rw_mbytes_per_sec": 0, 00:16:34.369 "r_mbytes_per_sec": 0, 00:16:34.369 "w_mbytes_per_sec": 0 00:16:34.369 }, 00:16:34.369 "claimed": false, 00:16:34.369 "zoned": false, 00:16:34.369 "supported_io_types": { 00:16:34.369 "read": true, 00:16:34.369 "write": true, 00:16:34.369 "unmap": true, 00:16:34.369 "flush": true, 00:16:34.369 "reset": true, 00:16:34.369 "nvme_admin": false, 00:16:34.369 "nvme_io": false, 00:16:34.369 "nvme_io_md": false, 00:16:34.369 "write_zeroes": true, 00:16:34.369 "zcopy": true, 00:16:34.369 "get_zone_info": false, 00:16:34.369 "zone_management": false, 00:16:34.369 "zone_append": false, 00:16:34.369 "compare": false, 00:16:34.369 "compare_and_write": false, 00:16:34.369 "abort": true, 00:16:34.369 "seek_hole": false, 00:16:34.369 "seek_data": false, 00:16:34.369 "copy": true, 00:16:34.369 "nvme_iov_md": false 00:16:34.369 }, 00:16:34.369 "memory_domains": [ 00:16:34.369 { 00:16:34.369 "dma_device_id": "system", 00:16:34.369 "dma_device_type": 1 00:16:34.369 }, 00:16:34.369 { 00:16:34.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.369 "dma_device_type": 2 00:16:34.369 } 00:16:34.369 ], 00:16:34.369 "driver_specific": {} 00:16:34.369 } 00:16:34.369 ] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.369 13:33:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 BaseBdev4 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:34.369 [ 00:16:34.369 { 00:16:34.369 "name": "BaseBdev4", 00:16:34.369 "aliases": [ 00:16:34.369 "5fba6435-cfc9-4396-a0b1-9bfec3e216d6" 00:16:34.369 ], 00:16:34.369 "product_name": "Malloc disk", 00:16:34.369 "block_size": 512, 00:16:34.369 "num_blocks": 65536, 00:16:34.369 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:34.369 "assigned_rate_limits": { 00:16:34.369 "rw_ios_per_sec": 0, 00:16:34.369 "rw_mbytes_per_sec": 0, 00:16:34.369 "r_mbytes_per_sec": 0, 00:16:34.369 "w_mbytes_per_sec": 0 00:16:34.369 }, 00:16:34.369 "claimed": false, 00:16:34.369 "zoned": false, 00:16:34.369 "supported_io_types": { 00:16:34.369 "read": true, 00:16:34.369 "write": true, 00:16:34.369 "unmap": true, 00:16:34.369 "flush": true, 00:16:34.369 "reset": true, 00:16:34.369 "nvme_admin": false, 00:16:34.369 "nvme_io": false, 00:16:34.369 "nvme_io_md": false, 00:16:34.369 "write_zeroes": true, 00:16:34.369 "zcopy": true, 00:16:34.369 "get_zone_info": false, 00:16:34.369 "zone_management": false, 00:16:34.369 "zone_append": false, 00:16:34.369 "compare": false, 00:16:34.369 "compare_and_write": false, 00:16:34.369 "abort": true, 00:16:34.369 "seek_hole": false, 00:16:34.369 "seek_data": false, 00:16:34.369 "copy": true, 00:16:34.369 "nvme_iov_md": false 00:16:34.369 }, 00:16:34.369 "memory_domains": [ 00:16:34.369 { 00:16:34.369 "dma_device_id": "system", 00:16:34.369 "dma_device_type": 1 00:16:34.369 }, 00:16:34.369 { 00:16:34.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.369 "dma_device_type": 2 00:16:34.369 } 00:16:34.369 ], 00:16:34.369 "driver_specific": {} 00:16:34.369 } 00:16:34.369 ] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:34.369 13:33:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.369 [2024-11-18 13:33:04.325585] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.369 [2024-11-18 13:33:04.325668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.369 [2024-11-18 13:33:04.325709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.369 [2024-11-18 13:33:04.327480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.369 [2024-11-18 13:33:04.327569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.369 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.370 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.370 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.370 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.370 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.370 "name": "Existed_Raid", 00:16:34.370 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:34.370 "strip_size_kb": 64, 00:16:34.370 "state": "configuring", 00:16:34.370 "raid_level": "raid5f", 00:16:34.370 "superblock": true, 00:16:34.370 "num_base_bdevs": 4, 00:16:34.370 "num_base_bdevs_discovered": 3, 00:16:34.370 "num_base_bdevs_operational": 4, 00:16:34.370 "base_bdevs_list": [ 00:16:34.370 { 00:16:34.370 "name": "BaseBdev1", 00:16:34.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.370 "is_configured": false, 00:16:34.370 "data_offset": 0, 00:16:34.370 "data_size": 0 00:16:34.370 }, 00:16:34.370 { 00:16:34.370 "name": "BaseBdev2", 00:16:34.370 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:34.370 "is_configured": true, 00:16:34.370 "data_offset": 2048, 00:16:34.370 
"data_size": 63488 00:16:34.370 }, 00:16:34.370 { 00:16:34.370 "name": "BaseBdev3", 00:16:34.370 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:34.370 "is_configured": true, 00:16:34.370 "data_offset": 2048, 00:16:34.370 "data_size": 63488 00:16:34.370 }, 00:16:34.370 { 00:16:34.370 "name": "BaseBdev4", 00:16:34.370 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:34.370 "is_configured": true, 00:16:34.370 "data_offset": 2048, 00:16:34.370 "data_size": 63488 00:16:34.370 } 00:16:34.370 ] 00:16:34.370 }' 00:16:34.370 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.370 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.940 [2024-11-18 13:33:04.740870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.940 13:33:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.940 "name": "Existed_Raid", 00:16:34.940 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:34.940 "strip_size_kb": 64, 00:16:34.940 "state": "configuring", 00:16:34.940 "raid_level": "raid5f", 00:16:34.940 "superblock": true, 00:16:34.940 "num_base_bdevs": 4, 00:16:34.940 "num_base_bdevs_discovered": 2, 00:16:34.940 "num_base_bdevs_operational": 4, 00:16:34.940 "base_bdevs_list": [ 00:16:34.940 { 00:16:34.940 "name": "BaseBdev1", 00:16:34.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.940 "is_configured": false, 00:16:34.940 "data_offset": 0, 00:16:34.940 "data_size": 0 00:16:34.940 }, 00:16:34.940 { 00:16:34.940 "name": null, 00:16:34.940 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:34.940 
"is_configured": false, 00:16:34.940 "data_offset": 0, 00:16:34.940 "data_size": 63488 00:16:34.940 }, 00:16:34.940 { 00:16:34.940 "name": "BaseBdev3", 00:16:34.940 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:34.940 "is_configured": true, 00:16:34.940 "data_offset": 2048, 00:16:34.940 "data_size": 63488 00:16:34.940 }, 00:16:34.940 { 00:16:34.940 "name": "BaseBdev4", 00:16:34.940 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:34.940 "is_configured": true, 00:16:34.940 "data_offset": 2048, 00:16:34.940 "data_size": 63488 00:16:34.940 } 00:16:34.940 ] 00:16:34.940 }' 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.940 13:33:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.200 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.200 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:35.200 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.200 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.200 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.460 [2024-11-18 13:33:05.291407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:35.460 BaseBdev1 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.460 [ 00:16:35.460 { 00:16:35.460 "name": "BaseBdev1", 00:16:35.460 "aliases": [ 00:16:35.460 "fbdaa56e-79ec-4f05-83e2-c86ce7389b89" 00:16:35.460 ], 00:16:35.460 "product_name": "Malloc disk", 00:16:35.460 "block_size": 512, 00:16:35.460 "num_blocks": 65536, 00:16:35.460 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 
00:16:35.460 "assigned_rate_limits": { 00:16:35.460 "rw_ios_per_sec": 0, 00:16:35.460 "rw_mbytes_per_sec": 0, 00:16:35.460 "r_mbytes_per_sec": 0, 00:16:35.460 "w_mbytes_per_sec": 0 00:16:35.460 }, 00:16:35.460 "claimed": true, 00:16:35.460 "claim_type": "exclusive_write", 00:16:35.460 "zoned": false, 00:16:35.460 "supported_io_types": { 00:16:35.460 "read": true, 00:16:35.460 "write": true, 00:16:35.460 "unmap": true, 00:16:35.460 "flush": true, 00:16:35.460 "reset": true, 00:16:35.460 "nvme_admin": false, 00:16:35.460 "nvme_io": false, 00:16:35.460 "nvme_io_md": false, 00:16:35.460 "write_zeroes": true, 00:16:35.460 "zcopy": true, 00:16:35.460 "get_zone_info": false, 00:16:35.460 "zone_management": false, 00:16:35.460 "zone_append": false, 00:16:35.460 "compare": false, 00:16:35.460 "compare_and_write": false, 00:16:35.460 "abort": true, 00:16:35.460 "seek_hole": false, 00:16:35.460 "seek_data": false, 00:16:35.460 "copy": true, 00:16:35.460 "nvme_iov_md": false 00:16:35.460 }, 00:16:35.460 "memory_domains": [ 00:16:35.460 { 00:16:35.460 "dma_device_id": "system", 00:16:35.460 "dma_device_type": 1 00:16:35.460 }, 00:16:35.460 { 00:16:35.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.460 "dma_device_type": 2 00:16:35.460 } 00:16:35.460 ], 00:16:35.460 "driver_specific": {} 00:16:35.460 } 00:16:35.460 ] 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.460 13:33:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.460 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.461 "name": "Existed_Raid", 00:16:35.461 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:35.461 "strip_size_kb": 64, 00:16:35.461 "state": "configuring", 00:16:35.461 "raid_level": "raid5f", 00:16:35.461 "superblock": true, 00:16:35.461 "num_base_bdevs": 4, 00:16:35.461 "num_base_bdevs_discovered": 3, 00:16:35.461 "num_base_bdevs_operational": 4, 00:16:35.461 "base_bdevs_list": [ 00:16:35.461 { 00:16:35.461 "name": "BaseBdev1", 00:16:35.461 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 
00:16:35.461 "is_configured": true, 00:16:35.461 "data_offset": 2048, 00:16:35.461 "data_size": 63488 00:16:35.461 }, 00:16:35.461 { 00:16:35.461 "name": null, 00:16:35.461 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:35.461 "is_configured": false, 00:16:35.461 "data_offset": 0, 00:16:35.461 "data_size": 63488 00:16:35.461 }, 00:16:35.461 { 00:16:35.461 "name": "BaseBdev3", 00:16:35.461 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:35.461 "is_configured": true, 00:16:35.461 "data_offset": 2048, 00:16:35.461 "data_size": 63488 00:16:35.461 }, 00:16:35.461 { 00:16:35.461 "name": "BaseBdev4", 00:16:35.461 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:35.461 "is_configured": true, 00:16:35.461 "data_offset": 2048, 00:16:35.461 "data_size": 63488 00:16:35.461 } 00:16:35.461 ] 00:16:35.461 }' 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.461 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.031 [2024-11-18 13:33:05.838552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.031 "name": "Existed_Raid", 00:16:36.031 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:36.031 "strip_size_kb": 64, 00:16:36.031 "state": "configuring", 00:16:36.031 "raid_level": "raid5f", 00:16:36.031 "superblock": true, 00:16:36.031 "num_base_bdevs": 4, 00:16:36.031 "num_base_bdevs_discovered": 2, 00:16:36.031 "num_base_bdevs_operational": 4, 00:16:36.031 "base_bdevs_list": [ 00:16:36.031 { 00:16:36.031 "name": "BaseBdev1", 00:16:36.031 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 00:16:36.031 "is_configured": true, 00:16:36.031 "data_offset": 2048, 00:16:36.031 "data_size": 63488 00:16:36.031 }, 00:16:36.031 { 00:16:36.031 "name": null, 00:16:36.031 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:36.031 "is_configured": false, 00:16:36.031 "data_offset": 0, 00:16:36.031 "data_size": 63488 00:16:36.031 }, 00:16:36.031 { 00:16:36.031 "name": null, 00:16:36.031 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:36.031 "is_configured": false, 00:16:36.031 "data_offset": 0, 00:16:36.031 "data_size": 63488 00:16:36.031 }, 00:16:36.031 { 00:16:36.031 "name": "BaseBdev4", 00:16:36.031 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:36.031 "is_configured": true, 00:16:36.031 "data_offset": 2048, 00:16:36.031 "data_size": 63488 00:16:36.031 } 00:16:36.031 ] 00:16:36.031 }' 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.031 13:33:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.291 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.291 13:33:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.291 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.291 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.291 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.551 [2024-11-18 13:33:06.373611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.551 "name": "Existed_Raid", 00:16:36.551 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:36.551 "strip_size_kb": 64, 00:16:36.551 "state": "configuring", 00:16:36.551 "raid_level": "raid5f", 00:16:36.551 "superblock": true, 00:16:36.551 "num_base_bdevs": 4, 00:16:36.551 "num_base_bdevs_discovered": 3, 00:16:36.551 "num_base_bdevs_operational": 4, 00:16:36.551 "base_bdevs_list": [ 00:16:36.551 { 00:16:36.551 "name": "BaseBdev1", 00:16:36.551 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 00:16:36.551 "is_configured": true, 00:16:36.551 "data_offset": 2048, 00:16:36.551 "data_size": 63488 00:16:36.551 }, 00:16:36.551 { 00:16:36.551 "name": null, 00:16:36.551 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:36.551 "is_configured": false, 00:16:36.551 "data_offset": 0, 00:16:36.551 "data_size": 63488 00:16:36.551 }, 00:16:36.551 { 00:16:36.551 "name": "BaseBdev3", 00:16:36.551 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 
00:16:36.551 "is_configured": true, 00:16:36.551 "data_offset": 2048, 00:16:36.551 "data_size": 63488 00:16:36.551 }, 00:16:36.551 { 00:16:36.551 "name": "BaseBdev4", 00:16:36.551 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:36.551 "is_configured": true, 00:16:36.551 "data_offset": 2048, 00:16:36.551 "data_size": 63488 00:16:36.551 } 00:16:36.551 ] 00:16:36.551 }' 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.551 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.812 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.812 [2024-11-18 13:33:06.848801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.072 "name": "Existed_Raid", 00:16:37.072 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:37.072 "strip_size_kb": 64, 00:16:37.072 "state": "configuring", 00:16:37.072 "raid_level": "raid5f", 
00:16:37.072 "superblock": true, 00:16:37.072 "num_base_bdevs": 4, 00:16:37.072 "num_base_bdevs_discovered": 2, 00:16:37.072 "num_base_bdevs_operational": 4, 00:16:37.072 "base_bdevs_list": [ 00:16:37.072 { 00:16:37.072 "name": null, 00:16:37.072 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 00:16:37.072 "is_configured": false, 00:16:37.072 "data_offset": 0, 00:16:37.072 "data_size": 63488 00:16:37.072 }, 00:16:37.072 { 00:16:37.072 "name": null, 00:16:37.072 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:37.072 "is_configured": false, 00:16:37.072 "data_offset": 0, 00:16:37.072 "data_size": 63488 00:16:37.072 }, 00:16:37.072 { 00:16:37.072 "name": "BaseBdev3", 00:16:37.072 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:37.072 "is_configured": true, 00:16:37.072 "data_offset": 2048, 00:16:37.072 "data_size": 63488 00:16:37.072 }, 00:16:37.072 { 00:16:37.072 "name": "BaseBdev4", 00:16:37.072 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:37.072 "is_configured": true, 00:16:37.072 "data_offset": 2048, 00:16:37.072 "data_size": 63488 00:16:37.072 } 00:16:37.072 ] 00:16:37.072 }' 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.072 13:33:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.641 [2024-11-18 13:33:07.448235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.641 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.642 "name": "Existed_Raid", 00:16:37.642 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:37.642 "strip_size_kb": 64, 00:16:37.642 "state": "configuring", 00:16:37.642 "raid_level": "raid5f", 00:16:37.642 "superblock": true, 00:16:37.642 "num_base_bdevs": 4, 00:16:37.642 "num_base_bdevs_discovered": 3, 00:16:37.642 "num_base_bdevs_operational": 4, 00:16:37.642 "base_bdevs_list": [ 00:16:37.642 { 00:16:37.642 "name": null, 00:16:37.642 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 00:16:37.642 "is_configured": false, 00:16:37.642 "data_offset": 0, 00:16:37.642 "data_size": 63488 00:16:37.642 }, 00:16:37.642 { 00:16:37.642 "name": "BaseBdev2", 00:16:37.642 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:37.642 "is_configured": true, 00:16:37.642 "data_offset": 2048, 00:16:37.642 "data_size": 63488 00:16:37.642 }, 00:16:37.642 { 00:16:37.642 "name": "BaseBdev3", 00:16:37.642 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:37.642 "is_configured": true, 00:16:37.642 "data_offset": 2048, 00:16:37.642 "data_size": 63488 00:16:37.642 }, 00:16:37.642 { 00:16:37.642 "name": "BaseBdev4", 00:16:37.642 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:37.642 "is_configured": true, 00:16:37.642 "data_offset": 2048, 00:16:37.642 "data_size": 63488 00:16:37.642 } 00:16:37.642 ] 00:16:37.642 }' 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:37.642 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.902 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.902 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:37.902 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.902 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.902 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.162 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:38.162 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.162 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.162 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 13:33:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:38.162 13:33:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fbdaa56e-79ec-4f05-83e2-c86ce7389b89 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 [2024-11-18 13:33:08.043751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:38.162 [2024-11-18 13:33:08.044051] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:38.162 [2024-11-18 13:33:08.044099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:38.162 [2024-11-18 13:33:08.044402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:38.162 NewBaseBdev 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 [2024-11-18 13:33:08.051411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:38.162 [2024-11-18 13:33:08.051436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:38.162 [2024-11-18 13:33:08.051666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.162 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 [ 00:16:38.162 { 00:16:38.162 "name": "NewBaseBdev", 00:16:38.162 "aliases": [ 00:16:38.162 "fbdaa56e-79ec-4f05-83e2-c86ce7389b89" 00:16:38.162 ], 00:16:38.162 "product_name": "Malloc disk", 00:16:38.162 "block_size": 512, 00:16:38.162 "num_blocks": 65536, 00:16:38.162 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 00:16:38.162 "assigned_rate_limits": { 00:16:38.162 "rw_ios_per_sec": 0, 00:16:38.162 "rw_mbytes_per_sec": 0, 00:16:38.162 "r_mbytes_per_sec": 0, 00:16:38.162 "w_mbytes_per_sec": 0 00:16:38.162 }, 00:16:38.162 "claimed": true, 00:16:38.162 "claim_type": "exclusive_write", 00:16:38.162 "zoned": false, 00:16:38.163 "supported_io_types": { 00:16:38.163 "read": true, 00:16:38.163 "write": true, 00:16:38.163 "unmap": true, 00:16:38.163 "flush": true, 00:16:38.163 "reset": true, 00:16:38.163 "nvme_admin": false, 00:16:38.163 "nvme_io": false, 00:16:38.163 "nvme_io_md": false, 00:16:38.163 "write_zeroes": true, 00:16:38.163 "zcopy": true, 00:16:38.163 "get_zone_info": false, 00:16:38.163 "zone_management": false, 00:16:38.163 "zone_append": false, 00:16:38.163 "compare": false, 00:16:38.163 "compare_and_write": false, 00:16:38.163 "abort": true, 00:16:38.163 "seek_hole": false, 00:16:38.163 "seek_data": false, 00:16:38.163 "copy": true, 00:16:38.163 "nvme_iov_md": false 00:16:38.163 }, 00:16:38.163 "memory_domains": [ 00:16:38.163 { 00:16:38.163 "dma_device_id": "system", 00:16:38.163 "dma_device_type": 1 00:16:38.163 }, 00:16:38.163 { 00:16:38.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.163 "dma_device_type": 2 00:16:38.163 } 
00:16:38.163 ], 00:16:38.163 "driver_specific": {} 00:16:38.163 } 00:16:38.163 ] 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.163 
13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.163 "name": "Existed_Raid", 00:16:38.163 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:38.163 "strip_size_kb": 64, 00:16:38.163 "state": "online", 00:16:38.163 "raid_level": "raid5f", 00:16:38.163 "superblock": true, 00:16:38.163 "num_base_bdevs": 4, 00:16:38.163 "num_base_bdevs_discovered": 4, 00:16:38.163 "num_base_bdevs_operational": 4, 00:16:38.163 "base_bdevs_list": [ 00:16:38.163 { 00:16:38.163 "name": "NewBaseBdev", 00:16:38.163 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 00:16:38.163 "is_configured": true, 00:16:38.163 "data_offset": 2048, 00:16:38.163 "data_size": 63488 00:16:38.163 }, 00:16:38.163 { 00:16:38.163 "name": "BaseBdev2", 00:16:38.163 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:38.163 "is_configured": true, 00:16:38.163 "data_offset": 2048, 00:16:38.163 "data_size": 63488 00:16:38.163 }, 00:16:38.163 { 00:16:38.163 "name": "BaseBdev3", 00:16:38.163 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:38.163 "is_configured": true, 00:16:38.163 "data_offset": 2048, 00:16:38.163 "data_size": 63488 00:16:38.163 }, 00:16:38.163 { 00:16:38.163 "name": "BaseBdev4", 00:16:38.163 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:38.163 "is_configured": true, 00:16:38.163 "data_offset": 2048, 00:16:38.163 "data_size": 63488 00:16:38.163 } 00:16:38.163 ] 00:16:38.163 }' 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.163 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.733 [2024-11-18 13:33:08.503270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.733 "name": "Existed_Raid", 00:16:38.733 "aliases": [ 00:16:38.733 "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4" 00:16:38.733 ], 00:16:38.733 "product_name": "Raid Volume", 00:16:38.733 "block_size": 512, 00:16:38.733 "num_blocks": 190464, 00:16:38.733 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:38.733 "assigned_rate_limits": { 00:16:38.733 "rw_ios_per_sec": 0, 00:16:38.733 "rw_mbytes_per_sec": 0, 00:16:38.733 "r_mbytes_per_sec": 0, 00:16:38.733 "w_mbytes_per_sec": 0 00:16:38.733 }, 00:16:38.733 "claimed": false, 00:16:38.733 "zoned": false, 00:16:38.733 "supported_io_types": { 00:16:38.733 "read": true, 00:16:38.733 "write": true, 00:16:38.733 "unmap": false, 00:16:38.733 "flush": false, 
00:16:38.733 "reset": true, 00:16:38.733 "nvme_admin": false, 00:16:38.733 "nvme_io": false, 00:16:38.733 "nvme_io_md": false, 00:16:38.733 "write_zeroes": true, 00:16:38.733 "zcopy": false, 00:16:38.733 "get_zone_info": false, 00:16:38.733 "zone_management": false, 00:16:38.733 "zone_append": false, 00:16:38.733 "compare": false, 00:16:38.733 "compare_and_write": false, 00:16:38.733 "abort": false, 00:16:38.733 "seek_hole": false, 00:16:38.733 "seek_data": false, 00:16:38.733 "copy": false, 00:16:38.733 "nvme_iov_md": false 00:16:38.733 }, 00:16:38.733 "driver_specific": { 00:16:38.733 "raid": { 00:16:38.733 "uuid": "47eb97c4-6ff1-4ec1-ba83-fdebeee643e4", 00:16:38.733 "strip_size_kb": 64, 00:16:38.733 "state": "online", 00:16:38.733 "raid_level": "raid5f", 00:16:38.733 "superblock": true, 00:16:38.733 "num_base_bdevs": 4, 00:16:38.733 "num_base_bdevs_discovered": 4, 00:16:38.733 "num_base_bdevs_operational": 4, 00:16:38.733 "base_bdevs_list": [ 00:16:38.733 { 00:16:38.733 "name": "NewBaseBdev", 00:16:38.733 "uuid": "fbdaa56e-79ec-4f05-83e2-c86ce7389b89", 00:16:38.733 "is_configured": true, 00:16:38.733 "data_offset": 2048, 00:16:38.733 "data_size": 63488 00:16:38.733 }, 00:16:38.733 { 00:16:38.733 "name": "BaseBdev2", 00:16:38.733 "uuid": "4c223693-316a-4904-9a0a-20a91b970b74", 00:16:38.733 "is_configured": true, 00:16:38.733 "data_offset": 2048, 00:16:38.733 "data_size": 63488 00:16:38.733 }, 00:16:38.733 { 00:16:38.733 "name": "BaseBdev3", 00:16:38.733 "uuid": "762d8565-6a66-46c9-8991-2bb5495610e4", 00:16:38.733 "is_configured": true, 00:16:38.733 "data_offset": 2048, 00:16:38.733 "data_size": 63488 00:16:38.733 }, 00:16:38.733 { 00:16:38.733 "name": "BaseBdev4", 00:16:38.733 "uuid": "5fba6435-cfc9-4396-a0b1-9bfec3e216d6", 00:16:38.733 "is_configured": true, 00:16:38.733 "data_offset": 2048, 00:16:38.733 "data_size": 63488 00:16:38.733 } 00:16:38.733 ] 00:16:38.733 } 00:16:38.733 } 00:16:38.733 }' 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:38.733 BaseBdev2 00:16:38.733 BaseBdev3 00:16:38.733 BaseBdev4' 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:38.733 
13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.733 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.734 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.734 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.734 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:38.734 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.734 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.734 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.734 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.994 13:33:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.994 [2024-11-18 13:33:08.850441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.994 [2024-11-18 13:33:08.850504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.994 [2024-11-18 13:33:08.850592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.994 [2024-11-18 13:33:08.850912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.994 [2024-11-18 13:33:08.850966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83385 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83385 ']' 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83385 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83385 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83385' 00:16:38.994 killing process with pid 83385 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83385 00:16:38.994 [2024-11-18 13:33:08.896190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.994 13:33:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83385 00:16:39.254 [2024-11-18 13:33:09.271049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.638 13:33:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:40.638 00:16:40.638 real 0m11.510s 00:16:40.638 user 0m18.349s 00:16:40.638 sys 0m2.192s 00:16:40.638 13:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.638 13:33:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.638 ************************************ 00:16:40.638 END TEST raid5f_state_function_test_sb 00:16:40.638 ************************************ 00:16:40.638 13:33:10 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:40.638 13:33:10 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:40.638 13:33:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.638 13:33:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.638 ************************************ 00:16:40.638 START TEST raid5f_superblock_test 00:16:40.638 ************************************ 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84058 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84058 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84058 ']' 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.638 13:33:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.638 [2024-11-18 13:33:10.473641] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:40.638 [2024-11-18 13:33:10.473828] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84058 ] 00:16:40.638 [2024-11-18 13:33:10.651677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.898 [2024-11-18 13:33:10.755244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.898 [2024-11-18 13:33:10.936434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.898 [2024-11-18 13:33:10.936481] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.469 malloc1 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.469 [2024-11-18 13:33:11.357087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:41.469 [2024-11-18 13:33:11.357174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.469 [2024-11-18 13:33:11.357199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:41.469 [2024-11-18 13:33:11.357207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.469 [2024-11-18 13:33:11.359274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.469 [2024-11-18 13:33:11.359309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:41.469 pt1 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.469 malloc2 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.469 [2024-11-18 13:33:11.409476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.469 [2024-11-18 13:33:11.409573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.469 [2024-11-18 13:33:11.409610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:41.469 [2024-11-18 13:33:11.409638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.469 [2024-11-18 13:33:11.411620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.469 [2024-11-18 13:33:11.411691] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.469 pt2 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.469 malloc3 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.469 [2024-11-18 13:33:11.480592] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:41.469 [2024-11-18 13:33:11.480674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.469 [2024-11-18 13:33:11.480726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:41.469 [2024-11-18 13:33:11.480753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.469 [2024-11-18 13:33:11.482701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.469 [2024-11-18 13:33:11.482770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:41.469 pt3 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:41.469 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:41.470 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:41.470 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:41.470 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:41.470 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.470 13:33:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.729 malloc4 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.729 [2024-11-18 13:33:11.533510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:41.729 [2024-11-18 13:33:11.533555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.729 [2024-11-18 13:33:11.533588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:41.729 [2024-11-18 13:33:11.533596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.729 [2024-11-18 13:33:11.535557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.729 [2024-11-18 13:33:11.535594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:41.729 pt4 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.729 [2024-11-18 13:33:11.545521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:41.729 [2024-11-18 13:33:11.547250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.729 [2024-11-18 13:33:11.547315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.729 [2024-11-18 13:33:11.547375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:41.729 [2024-11-18 13:33:11.547559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:41.729 [2024-11-18 13:33:11.547574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:41.729 [2024-11-18 13:33:11.547801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:41.729 [2024-11-18 13:33:11.554400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:41.729 [2024-11-18 13:33:11.554421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:41.729 [2024-11-18 13:33:11.554589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.729 
13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.729 "name": "raid_bdev1", 00:16:41.729 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:41.729 "strip_size_kb": 64, 00:16:41.729 "state": "online", 00:16:41.729 "raid_level": "raid5f", 00:16:41.729 "superblock": true, 00:16:41.729 "num_base_bdevs": 4, 00:16:41.729 "num_base_bdevs_discovered": 4, 00:16:41.729 "num_base_bdevs_operational": 4, 00:16:41.729 "base_bdevs_list": [ 00:16:41.729 { 00:16:41.729 "name": "pt1", 00:16:41.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.729 "is_configured": true, 00:16:41.729 "data_offset": 2048, 00:16:41.729 "data_size": 63488 00:16:41.729 }, 00:16:41.729 { 00:16:41.729 "name": "pt2", 00:16:41.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.729 "is_configured": true, 00:16:41.729 "data_offset": 2048, 00:16:41.729 
"data_size": 63488 00:16:41.729 }, 00:16:41.729 { 00:16:41.729 "name": "pt3", 00:16:41.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.729 "is_configured": true, 00:16:41.729 "data_offset": 2048, 00:16:41.729 "data_size": 63488 00:16:41.729 }, 00:16:41.729 { 00:16:41.729 "name": "pt4", 00:16:41.729 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.729 "is_configured": true, 00:16:41.729 "data_offset": 2048, 00:16:41.729 "data_size": 63488 00:16:41.729 } 00:16:41.729 ] 00:16:41.729 }' 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.729 13:33:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.297 [2024-11-18 13:33:12.057969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.297 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.297 "name": "raid_bdev1", 00:16:42.297 "aliases": [ 00:16:42.297 "05518697-01dd-4189-bd60-c9dbdbff0903" 00:16:42.297 ], 00:16:42.297 "product_name": "Raid Volume", 00:16:42.297 "block_size": 512, 00:16:42.297 "num_blocks": 190464, 00:16:42.297 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:42.297 "assigned_rate_limits": { 00:16:42.297 "rw_ios_per_sec": 0, 00:16:42.297 "rw_mbytes_per_sec": 0, 00:16:42.297 "r_mbytes_per_sec": 0, 00:16:42.297 "w_mbytes_per_sec": 0 00:16:42.297 }, 00:16:42.297 "claimed": false, 00:16:42.297 "zoned": false, 00:16:42.297 "supported_io_types": { 00:16:42.297 "read": true, 00:16:42.297 "write": true, 00:16:42.297 "unmap": false, 00:16:42.297 "flush": false, 00:16:42.297 "reset": true, 00:16:42.297 "nvme_admin": false, 00:16:42.297 "nvme_io": false, 00:16:42.297 "nvme_io_md": false, 00:16:42.297 "write_zeroes": true, 00:16:42.297 "zcopy": false, 00:16:42.297 "get_zone_info": false, 00:16:42.297 "zone_management": false, 00:16:42.297 "zone_append": false, 00:16:42.297 "compare": false, 00:16:42.297 "compare_and_write": false, 00:16:42.297 "abort": false, 00:16:42.297 "seek_hole": false, 00:16:42.297 "seek_data": false, 00:16:42.297 "copy": false, 00:16:42.297 "nvme_iov_md": false 00:16:42.297 }, 00:16:42.297 "driver_specific": { 00:16:42.297 "raid": { 00:16:42.297 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:42.298 "strip_size_kb": 64, 00:16:42.298 "state": "online", 00:16:42.298 "raid_level": "raid5f", 00:16:42.298 "superblock": true, 00:16:42.298 "num_base_bdevs": 4, 00:16:42.298 "num_base_bdevs_discovered": 4, 00:16:42.298 "num_base_bdevs_operational": 4, 00:16:42.298 "base_bdevs_list": [ 00:16:42.298 { 00:16:42.298 "name": "pt1", 00:16:42.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.298 "is_configured": true, 00:16:42.298 "data_offset": 2048, 
00:16:42.298 "data_size": 63488 00:16:42.298 }, 00:16:42.298 { 00:16:42.298 "name": "pt2", 00:16:42.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.298 "is_configured": true, 00:16:42.298 "data_offset": 2048, 00:16:42.298 "data_size": 63488 00:16:42.298 }, 00:16:42.298 { 00:16:42.298 "name": "pt3", 00:16:42.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.298 "is_configured": true, 00:16:42.298 "data_offset": 2048, 00:16:42.298 "data_size": 63488 00:16:42.298 }, 00:16:42.298 { 00:16:42.298 "name": "pt4", 00:16:42.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.298 "is_configured": true, 00:16:42.298 "data_offset": 2048, 00:16:42.298 "data_size": 63488 00:16:42.298 } 00:16:42.298 ] 00:16:42.298 } 00:16:42.298 } 00:16:42.298 }' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:42.298 pt2 00:16:42.298 pt3 00:16:42.298 pt4' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.298 13:33:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.298 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.558 [2024-11-18 13:33:12.365408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=05518697-01dd-4189-bd60-c9dbdbff0903 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
05518697-01dd-4189-bd60-c9dbdbff0903 ']' 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.558 [2024-11-18 13:33:12.401216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.558 [2024-11-18 13:33:12.401273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.558 [2024-11-18 13:33:12.401361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.558 [2024-11-18 13:33:12.401449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.558 [2024-11-18 13:33:12.401513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.558 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.559 
13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.559 13:33:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.559 [2024-11-18 13:33:12.556963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:42.559 [2024-11-18 13:33:12.558848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:42.559 [2024-11-18 13:33:12.558962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:42.559 [2024-11-18 13:33:12.559013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:42.559 [2024-11-18 13:33:12.559062] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:42.559 [2024-11-18 13:33:12.559104] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:42.559 [2024-11-18 13:33:12.559122] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:42.559 [2024-11-18 13:33:12.559153] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:42.559 [2024-11-18 13:33:12.559165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.559 [2024-11-18 13:33:12.559175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:42.559 request: 00:16:42.559 { 00:16:42.559 "name": "raid_bdev1", 00:16:42.559 "raid_level": "raid5f", 00:16:42.559 "base_bdevs": [ 00:16:42.559 "malloc1", 00:16:42.559 "malloc2", 00:16:42.559 "malloc3", 00:16:42.559 "malloc4" 00:16:42.559 ], 00:16:42.559 "strip_size_kb": 64, 00:16:42.559 "superblock": false, 00:16:42.559 "method": "bdev_raid_create", 00:16:42.559 "req_id": 1 00:16:42.559 } 00:16:42.559 Got JSON-RPC error response 
00:16:42.559 response: 00:16:42.559 { 00:16:42.559 "code": -17, 00:16:42.559 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:42.559 } 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:42.559 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.819 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:42.819 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:42.819 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.820 [2024-11-18 13:33:12.624822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.820 [2024-11-18 13:33:12.624906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:42.820 [2024-11-18 13:33:12.624953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:42.820 [2024-11-18 13:33:12.624982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.820 [2024-11-18 13:33:12.627104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.820 [2024-11-18 13:33:12.627199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.820 [2024-11-18 13:33:12.627292] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:42.820 [2024-11-18 13:33:12.627375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:42.820 pt1 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.820 "name": "raid_bdev1", 00:16:42.820 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:42.820 "strip_size_kb": 64, 00:16:42.820 "state": "configuring", 00:16:42.820 "raid_level": "raid5f", 00:16:42.820 "superblock": true, 00:16:42.820 "num_base_bdevs": 4, 00:16:42.820 "num_base_bdevs_discovered": 1, 00:16:42.820 "num_base_bdevs_operational": 4, 00:16:42.820 "base_bdevs_list": [ 00:16:42.820 { 00:16:42.820 "name": "pt1", 00:16:42.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.820 "is_configured": true, 00:16:42.820 "data_offset": 2048, 00:16:42.820 "data_size": 63488 00:16:42.820 }, 00:16:42.820 { 00:16:42.820 "name": null, 00:16:42.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.820 "is_configured": false, 00:16:42.820 "data_offset": 2048, 00:16:42.820 "data_size": 63488 00:16:42.820 }, 00:16:42.820 { 00:16:42.820 "name": null, 00:16:42.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.820 "is_configured": false, 00:16:42.820 "data_offset": 2048, 00:16:42.820 "data_size": 63488 00:16:42.820 }, 00:16:42.820 { 00:16:42.820 "name": null, 00:16:42.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.820 "is_configured": false, 00:16:42.820 "data_offset": 2048, 00:16:42.820 "data_size": 63488 00:16:42.820 } 00:16:42.820 ] 00:16:42.820 }' 
00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.820 13:33:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.080 [2024-11-18 13:33:13.044110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:43.080 [2024-11-18 13:33:13.044181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.080 [2024-11-18 13:33:13.044209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:43.080 [2024-11-18 13:33:13.044219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.080 [2024-11-18 13:33:13.044601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.080 [2024-11-18 13:33:13.044629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:43.080 [2024-11-18 13:33:13.044692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:43.080 [2024-11-18 13:33:13.044712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:43.080 pt2 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.080 [2024-11-18 13:33:13.056106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.080 "name": "raid_bdev1", 00:16:43.080 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:43.080 "strip_size_kb": 64, 00:16:43.080 "state": "configuring", 00:16:43.080 "raid_level": "raid5f", 00:16:43.080 "superblock": true, 00:16:43.080 "num_base_bdevs": 4, 00:16:43.080 "num_base_bdevs_discovered": 1, 00:16:43.080 "num_base_bdevs_operational": 4, 00:16:43.080 "base_bdevs_list": [ 00:16:43.080 { 00:16:43.080 "name": "pt1", 00:16:43.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.080 "is_configured": true, 00:16:43.080 "data_offset": 2048, 00:16:43.080 "data_size": 63488 00:16:43.080 }, 00:16:43.080 { 00:16:43.080 "name": null, 00:16:43.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.080 "is_configured": false, 00:16:43.080 "data_offset": 0, 00:16:43.080 "data_size": 63488 00:16:43.080 }, 00:16:43.080 { 00:16:43.080 "name": null, 00:16:43.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.080 "is_configured": false, 00:16:43.080 "data_offset": 2048, 00:16:43.080 "data_size": 63488 00:16:43.080 }, 00:16:43.080 { 00:16:43.080 "name": null, 00:16:43.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:43.080 "is_configured": false, 00:16:43.080 "data_offset": 2048, 00:16:43.080 "data_size": 63488 00:16:43.080 } 00:16:43.080 ] 00:16:43.080 }' 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.080 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 [2024-11-18 13:33:13.503309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:43.651 [2024-11-18 13:33:13.503398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.651 [2024-11-18 13:33:13.503433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:43.651 [2024-11-18 13:33:13.503461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.651 [2024-11-18 13:33:13.503864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.651 [2024-11-18 13:33:13.503918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:43.651 [2024-11-18 13:33:13.504008] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:43.651 [2024-11-18 13:33:13.504053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:43.651 pt2 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 [2024-11-18 13:33:13.515283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:43.651 [2024-11-18 13:33:13.515360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.651 [2024-11-18 13:33:13.515391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:43.651 [2024-11-18 13:33:13.515416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.651 [2024-11-18 13:33:13.515748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.651 [2024-11-18 13:33:13.515800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:43.651 [2024-11-18 13:33:13.515883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:43.651 [2024-11-18 13:33:13.515924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:43.651 pt3 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 [2024-11-18 13:33:13.527246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:43.651 [2024-11-18 13:33:13.527323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.651 [2024-11-18 13:33:13.527357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:43.651 [2024-11-18 13:33:13.527384] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.651 [2024-11-18 13:33:13.527715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.651 [2024-11-18 13:33:13.527766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:43.651 [2024-11-18 13:33:13.527849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:43.651 [2024-11-18 13:33:13.527891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:43.651 [2024-11-18 13:33:13.528036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:43.651 [2024-11-18 13:33:13.528071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:43.651 [2024-11-18 13:33:13.528325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:43.651 [2024-11-18 13:33:13.535253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:43.651 [2024-11-18 13:33:13.535274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:43.651 [2024-11-18 13:33:13.535439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.651 pt4 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.651 "name": "raid_bdev1", 00:16:43.651 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:43.651 "strip_size_kb": 64, 00:16:43.651 "state": "online", 00:16:43.651 "raid_level": "raid5f", 00:16:43.651 "superblock": true, 00:16:43.651 "num_base_bdevs": 4, 00:16:43.651 "num_base_bdevs_discovered": 4, 00:16:43.651 "num_base_bdevs_operational": 4, 00:16:43.651 "base_bdevs_list": [ 00:16:43.651 { 00:16:43.651 "name": "pt1", 00:16:43.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.651 "is_configured": true, 00:16:43.651 
"data_offset": 2048, 00:16:43.651 "data_size": 63488 00:16:43.651 }, 00:16:43.651 { 00:16:43.651 "name": "pt2", 00:16:43.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.651 "is_configured": true, 00:16:43.651 "data_offset": 2048, 00:16:43.651 "data_size": 63488 00:16:43.651 }, 00:16:43.651 { 00:16:43.651 "name": "pt3", 00:16:43.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.651 "is_configured": true, 00:16:43.651 "data_offset": 2048, 00:16:43.651 "data_size": 63488 00:16:43.651 }, 00:16:43.651 { 00:16:43.651 "name": "pt4", 00:16:43.651 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:43.651 "is_configured": true, 00:16:43.651 "data_offset": 2048, 00:16:43.651 "data_size": 63488 00:16:43.651 } 00:16:43.651 ] 00:16:43.651 }' 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.651 13:33:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.221 13:33:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.221 [2024-11-18 13:33:14.014930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.221 "name": "raid_bdev1", 00:16:44.221 "aliases": [ 00:16:44.221 "05518697-01dd-4189-bd60-c9dbdbff0903" 00:16:44.221 ], 00:16:44.221 "product_name": "Raid Volume", 00:16:44.221 "block_size": 512, 00:16:44.221 "num_blocks": 190464, 00:16:44.221 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:44.221 "assigned_rate_limits": { 00:16:44.221 "rw_ios_per_sec": 0, 00:16:44.221 "rw_mbytes_per_sec": 0, 00:16:44.221 "r_mbytes_per_sec": 0, 00:16:44.221 "w_mbytes_per_sec": 0 00:16:44.221 }, 00:16:44.221 "claimed": false, 00:16:44.221 "zoned": false, 00:16:44.221 "supported_io_types": { 00:16:44.221 "read": true, 00:16:44.221 "write": true, 00:16:44.221 "unmap": false, 00:16:44.221 "flush": false, 00:16:44.221 "reset": true, 00:16:44.221 "nvme_admin": false, 00:16:44.221 "nvme_io": false, 00:16:44.221 "nvme_io_md": false, 00:16:44.221 "write_zeroes": true, 00:16:44.221 "zcopy": false, 00:16:44.221 "get_zone_info": false, 00:16:44.221 "zone_management": false, 00:16:44.221 "zone_append": false, 00:16:44.221 "compare": false, 00:16:44.221 "compare_and_write": false, 00:16:44.221 "abort": false, 00:16:44.221 "seek_hole": false, 00:16:44.221 "seek_data": false, 00:16:44.221 "copy": false, 00:16:44.221 "nvme_iov_md": false 00:16:44.221 }, 00:16:44.221 "driver_specific": { 00:16:44.221 "raid": { 00:16:44.221 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:44.221 "strip_size_kb": 64, 00:16:44.221 "state": "online", 00:16:44.221 "raid_level": "raid5f", 00:16:44.221 "superblock": true, 00:16:44.221 "num_base_bdevs": 4, 00:16:44.221 "num_base_bdevs_discovered": 4, 
00:16:44.221 "num_base_bdevs_operational": 4, 00:16:44.221 "base_bdevs_list": [ 00:16:44.221 { 00:16:44.221 "name": "pt1", 00:16:44.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.221 "is_configured": true, 00:16:44.221 "data_offset": 2048, 00:16:44.221 "data_size": 63488 00:16:44.221 }, 00:16:44.221 { 00:16:44.221 "name": "pt2", 00:16:44.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.221 "is_configured": true, 00:16:44.221 "data_offset": 2048, 00:16:44.221 "data_size": 63488 00:16:44.221 }, 00:16:44.221 { 00:16:44.221 "name": "pt3", 00:16:44.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.221 "is_configured": true, 00:16:44.221 "data_offset": 2048, 00:16:44.221 "data_size": 63488 00:16:44.221 }, 00:16:44.221 { 00:16:44.221 "name": "pt4", 00:16:44.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.221 "is_configured": true, 00:16:44.221 "data_offset": 2048, 00:16:44.221 "data_size": 63488 00:16:44.221 } 00:16:44.221 ] 00:16:44.221 } 00:16:44.221 } 00:16:44.221 }' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:44.221 pt2 00:16:44.221 pt3 00:16:44.221 pt4' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.221 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.222 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.222 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.222 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:44.222 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.222 13:33:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.222 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:44.482 [2024-11-18 13:33:14.346332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.482 
13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 05518697-01dd-4189-bd60-c9dbdbff0903 '!=' 05518697-01dd-4189-bd60-c9dbdbff0903 ']' 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.482 [2024-11-18 13:33:14.370180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.482 "name": "raid_bdev1", 00:16:44.482 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:44.482 "strip_size_kb": 64, 00:16:44.482 "state": "online", 00:16:44.482 "raid_level": "raid5f", 00:16:44.482 "superblock": true, 00:16:44.482 "num_base_bdevs": 4, 00:16:44.482 "num_base_bdevs_discovered": 3, 00:16:44.482 "num_base_bdevs_operational": 3, 00:16:44.482 "base_bdevs_list": [ 00:16:44.482 { 00:16:44.482 "name": null, 00:16:44.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.482 "is_configured": false, 00:16:44.482 "data_offset": 0, 00:16:44.482 "data_size": 63488 00:16:44.482 }, 00:16:44.482 { 00:16:44.482 "name": "pt2", 00:16:44.482 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.482 "is_configured": true, 00:16:44.482 "data_offset": 2048, 00:16:44.482 "data_size": 63488 00:16:44.482 }, 00:16:44.482 { 00:16:44.482 "name": "pt3", 00:16:44.482 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.482 "is_configured": true, 00:16:44.482 "data_offset": 2048, 00:16:44.482 "data_size": 63488 00:16:44.482 }, 00:16:44.482 { 00:16:44.482 "name": "pt4", 00:16:44.482 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.482 "is_configured": true, 00:16:44.482 
"data_offset": 2048, 00:16:44.482 "data_size": 63488 00:16:44.482 } 00:16:44.482 ] 00:16:44.482 }' 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.482 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 [2024-11-18 13:33:14.845297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.052 [2024-11-18 13:33:14.845364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.052 [2024-11-18 13:33:14.845455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.052 [2024-11-18 13:33:14.845543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.052 [2024-11-18 13:33:14.845586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:45.052 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.053 [2024-11-18 13:33:14.941170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:45.053 [2024-11-18 13:33:14.941212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.053 [2024-11-18 13:33:14.941244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:45.053 [2024-11-18 13:33:14.941252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.053 [2024-11-18 13:33:14.943274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.053 [2024-11-18 13:33:14.943308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:45.053 [2024-11-18 13:33:14.943386] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:45.053 [2024-11-18 13:33:14.943437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.053 pt2 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.053 "name": "raid_bdev1", 00:16:45.053 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:45.053 "strip_size_kb": 64, 00:16:45.053 "state": "configuring", 00:16:45.053 "raid_level": "raid5f", 00:16:45.053 "superblock": true, 00:16:45.053 
"num_base_bdevs": 4, 00:16:45.053 "num_base_bdevs_discovered": 1, 00:16:45.053 "num_base_bdevs_operational": 3, 00:16:45.053 "base_bdevs_list": [ 00:16:45.053 { 00:16:45.053 "name": null, 00:16:45.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.053 "is_configured": false, 00:16:45.053 "data_offset": 2048, 00:16:45.053 "data_size": 63488 00:16:45.053 }, 00:16:45.053 { 00:16:45.053 "name": "pt2", 00:16:45.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.053 "is_configured": true, 00:16:45.053 "data_offset": 2048, 00:16:45.053 "data_size": 63488 00:16:45.053 }, 00:16:45.053 { 00:16:45.053 "name": null, 00:16:45.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.053 "is_configured": false, 00:16:45.053 "data_offset": 2048, 00:16:45.053 "data_size": 63488 00:16:45.053 }, 00:16:45.053 { 00:16:45.053 "name": null, 00:16:45.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:45.053 "is_configured": false, 00:16:45.053 "data_offset": 2048, 00:16:45.053 "data_size": 63488 00:16:45.053 } 00:16:45.053 ] 00:16:45.053 }' 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.053 13:33:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.313 [2024-11-18 13:33:15.352420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:45.313 [2024-11-18 
13:33:15.352511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.313 [2024-11-18 13:33:15.352545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:45.313 [2024-11-18 13:33:15.352572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.313 [2024-11-18 13:33:15.352965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.313 [2024-11-18 13:33:15.353018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:45.313 [2024-11-18 13:33:15.353112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:45.313 [2024-11-18 13:33:15.353183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:45.313 pt3 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:45.313 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.573 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.573 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.573 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.573 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.573 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.573 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.573 "name": "raid_bdev1", 00:16:45.573 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:45.573 "strip_size_kb": 64, 00:16:45.573 "state": "configuring", 00:16:45.573 "raid_level": "raid5f", 00:16:45.573 "superblock": true, 00:16:45.573 "num_base_bdevs": 4, 00:16:45.573 "num_base_bdevs_discovered": 2, 00:16:45.573 "num_base_bdevs_operational": 3, 00:16:45.573 "base_bdevs_list": [ 00:16:45.573 { 00:16:45.573 "name": null, 00:16:45.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.573 "is_configured": false, 00:16:45.573 "data_offset": 2048, 00:16:45.573 "data_size": 63488 00:16:45.573 }, 00:16:45.573 { 00:16:45.573 "name": "pt2", 00:16:45.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.573 "is_configured": true, 00:16:45.573 "data_offset": 2048, 00:16:45.573 "data_size": 63488 00:16:45.573 }, 00:16:45.573 { 00:16:45.573 "name": "pt3", 00:16:45.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.573 "is_configured": true, 00:16:45.573 "data_offset": 2048, 00:16:45.573 "data_size": 63488 00:16:45.573 }, 00:16:45.573 { 00:16:45.573 "name": null, 00:16:45.573 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:45.573 "is_configured": false, 00:16:45.573 "data_offset": 2048, 
00:16:45.573 "data_size": 63488 00:16:45.573 } 00:16:45.573 ] 00:16:45.573 }' 00:16:45.573 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.573 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.832 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:45.832 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:45.832 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.833 [2024-11-18 13:33:15.795701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:45.833 [2024-11-18 13:33:15.795747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.833 [2024-11-18 13:33:15.795763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:45.833 [2024-11-18 13:33:15.795771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.833 [2024-11-18 13:33:15.796138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.833 [2024-11-18 13:33:15.796155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:45.833 [2024-11-18 13:33:15.796225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:45.833 [2024-11-18 13:33:15.796242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:45.833 [2024-11-18 13:33:15.796352] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:45.833 [2024-11-18 13:33:15.796360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:45.833 [2024-11-18 13:33:15.796590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:45.833 [2024-11-18 13:33:15.803696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:45.833 [2024-11-18 13:33:15.803723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:45.833 [2024-11-18 13:33:15.803986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.833 pt4 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.833 
13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.833 "name": "raid_bdev1", 00:16:45.833 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:45.833 "strip_size_kb": 64, 00:16:45.833 "state": "online", 00:16:45.833 "raid_level": "raid5f", 00:16:45.833 "superblock": true, 00:16:45.833 "num_base_bdevs": 4, 00:16:45.833 "num_base_bdevs_discovered": 3, 00:16:45.833 "num_base_bdevs_operational": 3, 00:16:45.833 "base_bdevs_list": [ 00:16:45.833 { 00:16:45.833 "name": null, 00:16:45.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.833 "is_configured": false, 00:16:45.833 "data_offset": 2048, 00:16:45.833 "data_size": 63488 00:16:45.833 }, 00:16:45.833 { 00:16:45.833 "name": "pt2", 00:16:45.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.833 "is_configured": true, 00:16:45.833 "data_offset": 2048, 00:16:45.833 "data_size": 63488 00:16:45.833 }, 00:16:45.833 { 00:16:45.833 "name": "pt3", 00:16:45.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.833 "is_configured": true, 00:16:45.833 "data_offset": 2048, 00:16:45.833 "data_size": 63488 00:16:45.833 }, 00:16:45.833 { 00:16:45.833 "name": "pt4", 00:16:45.833 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:45.833 "is_configured": true, 00:16:45.833 "data_offset": 2048, 00:16:45.833 "data_size": 63488 00:16:45.833 } 00:16:45.833 ] 00:16:45.833 }' 00:16:45.833 13:33:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.833 13:33:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.402 [2024-11-18 13:33:16.211324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.402 [2024-11-18 13:33:16.211353] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.402 [2024-11-18 13:33:16.211423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.402 [2024-11-18 13:33:16.211492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.402 [2024-11-18 13:33:16.211505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.402 [2024-11-18 13:33:16.287197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:46.402 [2024-11-18 13:33:16.287246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.402 [2024-11-18 13:33:16.287268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:46.402 [2024-11-18 13:33:16.287279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.402 [2024-11-18 13:33:16.289372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.402 [2024-11-18 13:33:16.289407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:46.402 [2024-11-18 13:33:16.289478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:46.402 [2024-11-18 13:33:16.289525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:46.402 
[2024-11-18 13:33:16.289636] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:46.402 [2024-11-18 13:33:16.289648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.402 [2024-11-18 13:33:16.289660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:46.402 [2024-11-18 13:33:16.289715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:46.402 [2024-11-18 13:33:16.289805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:46.402 pt1 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.402 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.403 "name": "raid_bdev1", 00:16:46.403 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:46.403 "strip_size_kb": 64, 00:16:46.403 "state": "configuring", 00:16:46.403 "raid_level": "raid5f", 00:16:46.403 "superblock": true, 00:16:46.403 "num_base_bdevs": 4, 00:16:46.403 "num_base_bdevs_discovered": 2, 00:16:46.403 "num_base_bdevs_operational": 3, 00:16:46.403 "base_bdevs_list": [ 00:16:46.403 { 00:16:46.403 "name": null, 00:16:46.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.403 "is_configured": false, 00:16:46.403 "data_offset": 2048, 00:16:46.403 "data_size": 63488 00:16:46.403 }, 00:16:46.403 { 00:16:46.403 "name": "pt2", 00:16:46.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.403 "is_configured": true, 00:16:46.403 "data_offset": 2048, 00:16:46.403 "data_size": 63488 00:16:46.403 }, 00:16:46.403 { 00:16:46.403 "name": "pt3", 00:16:46.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.403 "is_configured": true, 00:16:46.403 "data_offset": 2048, 00:16:46.403 "data_size": 63488 00:16:46.403 }, 00:16:46.403 { 00:16:46.403 "name": null, 00:16:46.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:46.403 "is_configured": false, 00:16:46.403 "data_offset": 2048, 00:16:46.403 "data_size": 63488 00:16:46.403 } 00:16:46.403 ] 
00:16:46.403 }' 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.403 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.662 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:46.662 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:46.662 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.662 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.662 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.922 [2024-11-18 13:33:16.738489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:46.922 [2024-11-18 13:33:16.738531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.922 [2024-11-18 13:33:16.738551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:46.922 [2024-11-18 13:33:16.738560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.922 [2024-11-18 13:33:16.738917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.922 [2024-11-18 13:33:16.738936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:46.922 [2024-11-18 13:33:16.739002] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:46.922 [2024-11-18 13:33:16.739034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:46.922 [2024-11-18 13:33:16.739179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:46.922 [2024-11-18 13:33:16.739188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:46.922 [2024-11-18 13:33:16.739412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:46.922 [2024-11-18 13:33:16.746160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:46.922 [2024-11-18 13:33:16.746185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:46.922 [2024-11-18 13:33:16.746414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.922 pt4 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.922 13:33:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.922 "name": "raid_bdev1", 00:16:46.922 "uuid": "05518697-01dd-4189-bd60-c9dbdbff0903", 00:16:46.922 "strip_size_kb": 64, 00:16:46.922 "state": "online", 00:16:46.922 "raid_level": "raid5f", 00:16:46.922 "superblock": true, 00:16:46.922 "num_base_bdevs": 4, 00:16:46.922 "num_base_bdevs_discovered": 3, 00:16:46.922 "num_base_bdevs_operational": 3, 00:16:46.922 "base_bdevs_list": [ 00:16:46.922 { 00:16:46.922 "name": null, 00:16:46.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.922 "is_configured": false, 00:16:46.922 "data_offset": 2048, 00:16:46.922 "data_size": 63488 00:16:46.922 }, 00:16:46.922 { 00:16:46.922 "name": "pt2", 00:16:46.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.922 "is_configured": true, 00:16:46.922 "data_offset": 2048, 00:16:46.922 "data_size": 63488 00:16:46.922 }, 00:16:46.922 { 00:16:46.922 "name": "pt3", 00:16:46.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.922 "is_configured": true, 00:16:46.922 "data_offset": 2048, 00:16:46.922 "data_size": 63488 
00:16:46.922 }, 00:16:46.922 { 00:16:46.922 "name": "pt4", 00:16:46.922 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:46.922 "is_configured": true, 00:16:46.922 "data_offset": 2048, 00:16:46.922 "data_size": 63488 00:16:46.922 } 00:16:46.922 ] 00:16:46.922 }' 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.922 13:33:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.182 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.442 [2024-11-18 13:33:17.237475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 05518697-01dd-4189-bd60-c9dbdbff0903 '!=' 05518697-01dd-4189-bd60-c9dbdbff0903 ']' 00:16:47.442 13:33:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84058 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84058 ']' 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84058 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84058 00:16:47.442 killing process with pid 84058 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84058' 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84058 00:16:47.442 [2024-11-18 13:33:17.321284] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.442 [2024-11-18 13:33:17.321362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.442 [2024-11-18 13:33:17.321425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.442 [2024-11-18 13:33:17.321435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:47.442 13:33:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84058 00:16:47.701 [2024-11-18 13:33:17.686146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.084 ************************************ 00:16:49.084 END TEST raid5f_superblock_test 00:16:49.084 
************************************ 00:16:49.084 13:33:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:49.084 00:16:49.084 real 0m8.339s 00:16:49.084 user 0m13.170s 00:16:49.084 sys 0m1.573s 00:16:49.084 13:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.084 13:33:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.084 13:33:18 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:49.084 13:33:18 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:49.084 13:33:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:49.084 13:33:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.084 13:33:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.084 ************************************ 00:16:49.084 START TEST raid5f_rebuild_test 00:16:49.084 ************************************ 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:49.084 13:33:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84538 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84538 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84538 ']' 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.084 13:33:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.084 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:49.084 Zero copy mechanism will not be used. 00:16:49.084 [2024-11-18 13:33:18.909485] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:49.084 [2024-11-18 13:33:18.909601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84538 ] 00:16:49.084 [2024-11-18 13:33:19.082665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.343 [2024-11-18 13:33:19.189440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.343 [2024-11-18 13:33:19.367967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.343 [2024-11-18 13:33:19.368002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 BaseBdev1_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 [2024-11-18 13:33:19.758001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:49.912 [2024-11-18 13:33:19.758071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.912 [2024-11-18 13:33:19.758093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:49.912 [2024-11-18 13:33:19.758103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.912 [2024-11-18 13:33:19.760067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.912 [2024-11-18 13:33:19.760106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.912 BaseBdev1 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 BaseBdev2_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 [2024-11-18 13:33:19.811035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:49.912 [2024-11-18 13:33:19.811092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.912 [2024-11-18 13:33:19.811110] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:49.912 [2024-11-18 13:33:19.811121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.912 [2024-11-18 13:33:19.813050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.912 [2024-11-18 13:33:19.813086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.912 BaseBdev2 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 BaseBdev3_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 [2024-11-18 13:33:19.895111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:49.912 [2024-11-18 13:33:19.895175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.912 [2024-11-18 13:33:19.895197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:49.912 [2024-11-18 13:33:19.895208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.912 
[2024-11-18 13:33:19.897083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.912 [2024-11-18 13:33:19.897122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:49.912 BaseBdev3 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 BaseBdev4_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.912 [2024-11-18 13:33:19.946087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:49.912 [2024-11-18 13:33:19.946148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.912 [2024-11-18 13:33:19.946166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:49.912 [2024-11-18 13:33:19.946175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.912 [2024-11-18 13:33:19.948095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.912 [2024-11-18 13:33:19.948148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:49.912 BaseBdev4 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.912 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.172 spare_malloc 00:16:50.172 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.172 13:33:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:50.172 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.172 13:33:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.172 spare_delay 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.173 [2024-11-18 13:33:20.011287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.173 [2024-11-18 13:33:20.011342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.173 [2024-11-18 13:33:20.011361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:50.173 [2024-11-18 13:33:20.011372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.173 [2024-11-18 13:33:20.013284] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.173 [2024-11-18 13:33:20.013318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.173 spare 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.173 [2024-11-18 13:33:20.023315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.173 [2024-11-18 13:33:20.024984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.173 [2024-11-18 13:33:20.025046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.173 [2024-11-18 13:33:20.025092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:50.173 [2024-11-18 13:33:20.025183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.173 [2024-11-18 13:33:20.025195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:50.173 [2024-11-18 13:33:20.025406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:50.173 [2024-11-18 13:33:20.032178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.173 [2024-11-18 13:33:20.032200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.173 [2024-11-18 13:33:20.032381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.173 13:33:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.173 "name": "raid_bdev1", 00:16:50.173 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:50.173 "strip_size_kb": 64, 00:16:50.173 "state": "online", 00:16:50.173 
"raid_level": "raid5f", 00:16:50.173 "superblock": false, 00:16:50.173 "num_base_bdevs": 4, 00:16:50.173 "num_base_bdevs_discovered": 4, 00:16:50.173 "num_base_bdevs_operational": 4, 00:16:50.173 "base_bdevs_list": [ 00:16:50.173 { 00:16:50.173 "name": "BaseBdev1", 00:16:50.173 "uuid": "14cf101a-9892-5366-b4fe-800394e8ba08", 00:16:50.173 "is_configured": true, 00:16:50.173 "data_offset": 0, 00:16:50.173 "data_size": 65536 00:16:50.173 }, 00:16:50.173 { 00:16:50.173 "name": "BaseBdev2", 00:16:50.173 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:50.173 "is_configured": true, 00:16:50.173 "data_offset": 0, 00:16:50.173 "data_size": 65536 00:16:50.173 }, 00:16:50.173 { 00:16:50.173 "name": "BaseBdev3", 00:16:50.173 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:50.173 "is_configured": true, 00:16:50.173 "data_offset": 0, 00:16:50.173 "data_size": 65536 00:16:50.173 }, 00:16:50.173 { 00:16:50.173 "name": "BaseBdev4", 00:16:50.173 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:50.173 "is_configured": true, 00:16:50.173 "data_offset": 0, 00:16:50.173 "data_size": 65536 00:16:50.173 } 00:16:50.173 ] 00:16:50.173 }' 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.173 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.433 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.433 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.433 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.433 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:50.433 [2024-11-18 13:33:20.471319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:50.693 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:50.693 [2024-11-18 13:33:20.742889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:50.954 /dev/nbd0 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:50.954 1+0 records in 00:16:50.954 1+0 records out 00:16:50.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458028 s, 8.9 MB/s 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:50.954 13:33:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:51.524 512+0 records in 00:16:51.524 512+0 records out 00:16:51.524 100663296 bytes (101 MB, 96 MiB) copied, 0.453235 s, 222 MB/s 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:51.524 
[2024-11-18 13:33:21.486312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.524 [2024-11-18 13:33:21.509034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.524 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.524 "name": "raid_bdev1", 00:16:51.524 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:51.524 "strip_size_kb": 64, 00:16:51.524 "state": "online", 00:16:51.524 "raid_level": "raid5f", 00:16:51.524 "superblock": false, 00:16:51.524 "num_base_bdevs": 4, 00:16:51.524 "num_base_bdevs_discovered": 3, 00:16:51.524 "num_base_bdevs_operational": 3, 00:16:51.524 "base_bdevs_list": [ 00:16:51.525 { 00:16:51.525 "name": null, 00:16:51.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.525 "is_configured": false, 00:16:51.525 "data_offset": 0, 00:16:51.525 "data_size": 65536 00:16:51.525 }, 00:16:51.525 { 00:16:51.525 "name": "BaseBdev2", 00:16:51.525 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:51.525 "is_configured": true, 00:16:51.525 "data_offset": 0, 00:16:51.525 "data_size": 65536 00:16:51.525 }, 00:16:51.525 { 00:16:51.525 "name": "BaseBdev3", 00:16:51.525 "uuid": 
"5466a837-4f41-5e67-939a-a0d710da756a", 00:16:51.525 "is_configured": true, 00:16:51.525 "data_offset": 0, 00:16:51.525 "data_size": 65536 00:16:51.525 }, 00:16:51.525 { 00:16:51.525 "name": "BaseBdev4", 00:16:51.525 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:51.525 "is_configured": true, 00:16:51.525 "data_offset": 0, 00:16:51.525 "data_size": 65536 00:16:51.525 } 00:16:51.525 ] 00:16:51.525 }' 00:16:51.525 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.525 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.094 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:52.094 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.094 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.094 [2024-11-18 13:33:21.976241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.094 [2024-11-18 13:33:21.989442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:52.094 13:33:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.094 13:33:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:52.095 [2024-11-18 13:33:21.998145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:53.034 13:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.034 13:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.034 13:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.034 13:33:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.034 13:33:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.034 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.034 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.034 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.034 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.034 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.034 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.034 "name": "raid_bdev1", 00:16:53.034 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:53.034 "strip_size_kb": 64, 00:16:53.034 "state": "online", 00:16:53.034 "raid_level": "raid5f", 00:16:53.034 "superblock": false, 00:16:53.034 "num_base_bdevs": 4, 00:16:53.034 "num_base_bdevs_discovered": 4, 00:16:53.034 "num_base_bdevs_operational": 4, 00:16:53.034 "process": { 00:16:53.034 "type": "rebuild", 00:16:53.034 "target": "spare", 00:16:53.034 "progress": { 00:16:53.034 "blocks": 19200, 00:16:53.034 "percent": 9 00:16:53.034 } 00:16:53.034 }, 00:16:53.034 "base_bdevs_list": [ 00:16:53.034 { 00:16:53.034 "name": "spare", 00:16:53.034 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:16:53.034 "is_configured": true, 00:16:53.034 "data_offset": 0, 00:16:53.034 "data_size": 65536 00:16:53.034 }, 00:16:53.034 { 00:16:53.034 "name": "BaseBdev2", 00:16:53.034 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:53.034 "is_configured": true, 00:16:53.034 "data_offset": 0, 00:16:53.034 "data_size": 65536 00:16:53.034 }, 00:16:53.034 { 00:16:53.035 "name": "BaseBdev3", 00:16:53.035 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:53.035 "is_configured": true, 00:16:53.035 "data_offset": 0, 00:16:53.035 "data_size": 65536 00:16:53.035 }, 
00:16:53.035 { 00:16:53.035 "name": "BaseBdev4", 00:16:53.035 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:53.035 "is_configured": true, 00:16:53.035 "data_offset": 0, 00:16:53.035 "data_size": 65536 00:16:53.035 } 00:16:53.035 ] 00:16:53.035 }' 00:16:53.035 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.294 [2024-11-18 13:33:23.148793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.294 [2024-11-18 13:33:23.203726] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:53.294 [2024-11-18 13:33:23.203790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.294 [2024-11-18 13:33:23.203806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.294 [2024-11-18 13:33:23.203815] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.294 "name": "raid_bdev1", 00:16:53.294 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:53.294 "strip_size_kb": 64, 00:16:53.294 "state": "online", 00:16:53.294 "raid_level": "raid5f", 00:16:53.294 "superblock": false, 00:16:53.294 "num_base_bdevs": 4, 00:16:53.294 "num_base_bdevs_discovered": 3, 00:16:53.294 "num_base_bdevs_operational": 3, 00:16:53.294 "base_bdevs_list": [ 00:16:53.294 { 00:16:53.294 "name": null, 00:16:53.294 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:53.294 "is_configured": false, 00:16:53.294 "data_offset": 0, 00:16:53.294 "data_size": 65536 00:16:53.294 }, 00:16:53.294 { 00:16:53.294 "name": "BaseBdev2", 00:16:53.294 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:53.294 "is_configured": true, 00:16:53.294 "data_offset": 0, 00:16:53.294 "data_size": 65536 00:16:53.294 }, 00:16:53.294 { 00:16:53.294 "name": "BaseBdev3", 00:16:53.294 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:53.294 "is_configured": true, 00:16:53.294 "data_offset": 0, 00:16:53.294 "data_size": 65536 00:16:53.294 }, 00:16:53.294 { 00:16:53.294 "name": "BaseBdev4", 00:16:53.294 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:53.294 "is_configured": true, 00:16:53.294 "data_offset": 0, 00:16:53.294 "data_size": 65536 00:16:53.294 } 00:16:53.294 ] 00:16:53.294 }' 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.294 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.890 "name": "raid_bdev1", 00:16:53.890 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:53.890 "strip_size_kb": 64, 00:16:53.890 "state": "online", 00:16:53.890 "raid_level": "raid5f", 00:16:53.890 "superblock": false, 00:16:53.890 "num_base_bdevs": 4, 00:16:53.890 "num_base_bdevs_discovered": 3, 00:16:53.890 "num_base_bdevs_operational": 3, 00:16:53.890 "base_bdevs_list": [ 00:16:53.890 { 00:16:53.890 "name": null, 00:16:53.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.890 "is_configured": false, 00:16:53.890 "data_offset": 0, 00:16:53.890 "data_size": 65536 00:16:53.890 }, 00:16:53.890 { 00:16:53.890 "name": "BaseBdev2", 00:16:53.890 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:53.890 "is_configured": true, 00:16:53.890 "data_offset": 0, 00:16:53.890 "data_size": 65536 00:16:53.890 }, 00:16:53.890 { 00:16:53.890 "name": "BaseBdev3", 00:16:53.890 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:53.890 "is_configured": true, 00:16:53.890 "data_offset": 0, 00:16:53.890 "data_size": 65536 00:16:53.890 }, 00:16:53.890 { 00:16:53.890 "name": "BaseBdev4", 00:16:53.890 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:53.890 "is_configured": true, 00:16:53.890 "data_offset": 0, 00:16:53.890 "data_size": 65536 00:16:53.890 } 00:16:53.890 ] 00:16:53.890 }' 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.890 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.891 [2024-11-18 13:33:23.840748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.891 [2024-11-18 13:33:23.854591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:53.891 13:33:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.891 13:33:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:53.891 [2024-11-18 13:33:23.863403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.844 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.844 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.845 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.845 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.845 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.845 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.845 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.845 13:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.845 13:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.845 13:33:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.105 "name": "raid_bdev1", 00:16:55.105 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:55.105 "strip_size_kb": 64, 00:16:55.105 "state": "online", 00:16:55.105 "raid_level": "raid5f", 00:16:55.105 "superblock": false, 00:16:55.105 "num_base_bdevs": 4, 00:16:55.105 "num_base_bdevs_discovered": 4, 00:16:55.105 "num_base_bdevs_operational": 4, 00:16:55.105 "process": { 00:16:55.105 "type": "rebuild", 00:16:55.105 "target": "spare", 00:16:55.105 "progress": { 00:16:55.105 "blocks": 19200, 00:16:55.105 "percent": 9 00:16:55.105 } 00:16:55.105 }, 00:16:55.105 "base_bdevs_list": [ 00:16:55.105 { 00:16:55.105 "name": "spare", 00:16:55.105 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:16:55.105 "is_configured": true, 00:16:55.105 "data_offset": 0, 00:16:55.105 "data_size": 65536 00:16:55.105 }, 00:16:55.105 { 00:16:55.105 "name": "BaseBdev2", 00:16:55.105 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:55.105 "is_configured": true, 00:16:55.105 "data_offset": 0, 00:16:55.105 "data_size": 65536 00:16:55.105 }, 00:16:55.105 { 00:16:55.105 "name": "BaseBdev3", 00:16:55.105 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:55.105 "is_configured": true, 00:16:55.105 "data_offset": 0, 00:16:55.105 "data_size": 65536 00:16:55.105 }, 00:16:55.105 { 00:16:55.105 "name": "BaseBdev4", 00:16:55.105 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:55.105 "is_configured": true, 00:16:55.105 "data_offset": 0, 00:16:55.105 "data_size": 65536 00:16:55.105 } 00:16:55.105 ] 00:16:55.105 }' 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=618 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.105 13:33:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.105 13:33:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.105 13:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.105 "name": "raid_bdev1", 00:16:55.105 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:55.105 "strip_size_kb": 64, 
00:16:55.105 "state": "online", 00:16:55.105 "raid_level": "raid5f", 00:16:55.105 "superblock": false, 00:16:55.105 "num_base_bdevs": 4, 00:16:55.105 "num_base_bdevs_discovered": 4, 00:16:55.105 "num_base_bdevs_operational": 4, 00:16:55.105 "process": { 00:16:55.105 "type": "rebuild", 00:16:55.105 "target": "spare", 00:16:55.105 "progress": { 00:16:55.105 "blocks": 21120, 00:16:55.105 "percent": 10 00:16:55.105 } 00:16:55.105 }, 00:16:55.105 "base_bdevs_list": [ 00:16:55.105 { 00:16:55.105 "name": "spare", 00:16:55.105 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:16:55.105 "is_configured": true, 00:16:55.105 "data_offset": 0, 00:16:55.105 "data_size": 65536 00:16:55.105 }, 00:16:55.105 { 00:16:55.105 "name": "BaseBdev2", 00:16:55.105 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:55.105 "is_configured": true, 00:16:55.105 "data_offset": 0, 00:16:55.105 "data_size": 65536 00:16:55.105 }, 00:16:55.105 { 00:16:55.105 "name": "BaseBdev3", 00:16:55.105 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:55.105 "is_configured": true, 00:16:55.105 "data_offset": 0, 00:16:55.105 "data_size": 65536 00:16:55.105 }, 00:16:55.105 { 00:16:55.105 "name": "BaseBdev4", 00:16:55.105 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:55.105 "is_configured": true, 00:16:55.105 "data_offset": 0, 00:16:55.105 "data_size": 65536 00:16:55.105 } 00:16:55.105 ] 00:16:55.105 }' 00:16:55.105 13:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.105 13:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.105 13:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.105 13:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.105 13:33:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.490 "name": "raid_bdev1", 00:16:56.490 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:56.490 "strip_size_kb": 64, 00:16:56.490 "state": "online", 00:16:56.490 "raid_level": "raid5f", 00:16:56.490 "superblock": false, 00:16:56.490 "num_base_bdevs": 4, 00:16:56.490 "num_base_bdevs_discovered": 4, 00:16:56.490 "num_base_bdevs_operational": 4, 00:16:56.490 "process": { 00:16:56.490 "type": "rebuild", 00:16:56.490 "target": "spare", 00:16:56.490 "progress": { 00:16:56.490 "blocks": 42240, 00:16:56.490 "percent": 21 00:16:56.490 } 00:16:56.490 }, 00:16:56.490 "base_bdevs_list": [ 00:16:56.490 { 00:16:56.490 "name": "spare", 00:16:56.490 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:16:56.490 "is_configured": true, 
00:16:56.490 "data_offset": 0, 00:16:56.490 "data_size": 65536 00:16:56.490 }, 00:16:56.490 { 00:16:56.490 "name": "BaseBdev2", 00:16:56.490 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:56.490 "is_configured": true, 00:16:56.490 "data_offset": 0, 00:16:56.490 "data_size": 65536 00:16:56.490 }, 00:16:56.490 { 00:16:56.490 "name": "BaseBdev3", 00:16:56.490 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:56.490 "is_configured": true, 00:16:56.490 "data_offset": 0, 00:16:56.490 "data_size": 65536 00:16:56.490 }, 00:16:56.490 { 00:16:56.490 "name": "BaseBdev4", 00:16:56.490 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:56.490 "is_configured": true, 00:16:56.490 "data_offset": 0, 00:16:56.490 "data_size": 65536 00:16:56.490 } 00:16:56.490 ] 00:16:56.490 }' 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.490 13:33:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.430 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.430 "name": "raid_bdev1", 00:16:57.430 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:57.430 "strip_size_kb": 64, 00:16:57.430 "state": "online", 00:16:57.430 "raid_level": "raid5f", 00:16:57.430 "superblock": false, 00:16:57.430 "num_base_bdevs": 4, 00:16:57.430 "num_base_bdevs_discovered": 4, 00:16:57.430 "num_base_bdevs_operational": 4, 00:16:57.430 "process": { 00:16:57.430 "type": "rebuild", 00:16:57.430 "target": "spare", 00:16:57.430 "progress": { 00:16:57.430 "blocks": 65280, 00:16:57.430 "percent": 33 00:16:57.430 } 00:16:57.430 }, 00:16:57.430 "base_bdevs_list": [ 00:16:57.430 { 00:16:57.430 "name": "spare", 00:16:57.430 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:16:57.430 "is_configured": true, 00:16:57.430 "data_offset": 0, 00:16:57.430 "data_size": 65536 00:16:57.430 }, 00:16:57.430 { 00:16:57.430 "name": "BaseBdev2", 00:16:57.430 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:57.430 "is_configured": true, 00:16:57.430 "data_offset": 0, 00:16:57.430 "data_size": 65536 00:16:57.430 }, 00:16:57.430 { 00:16:57.430 "name": "BaseBdev3", 00:16:57.430 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:57.430 "is_configured": true, 00:16:57.430 "data_offset": 0, 00:16:57.431 "data_size": 65536 00:16:57.431 }, 00:16:57.431 { 00:16:57.431 "name": "BaseBdev4", 00:16:57.431 "uuid": 
"c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:57.431 "is_configured": true, 00:16:57.431 "data_offset": 0, 00:16:57.431 "data_size": 65536 00:16:57.431 } 00:16:57.431 ] 00:16:57.431 }' 00:16:57.431 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.431 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.431 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.431 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.431 13:33:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.371 13:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.631 13:33:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.631 13:33:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.631 "name": "raid_bdev1", 00:16:58.631 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:58.631 "strip_size_kb": 64, 00:16:58.631 "state": "online", 00:16:58.631 "raid_level": "raid5f", 00:16:58.631 "superblock": false, 00:16:58.631 "num_base_bdevs": 4, 00:16:58.631 "num_base_bdevs_discovered": 4, 00:16:58.631 "num_base_bdevs_operational": 4, 00:16:58.631 "process": { 00:16:58.631 "type": "rebuild", 00:16:58.631 "target": "spare", 00:16:58.631 "progress": { 00:16:58.631 "blocks": 86400, 00:16:58.631 "percent": 43 00:16:58.631 } 00:16:58.631 }, 00:16:58.631 "base_bdevs_list": [ 00:16:58.631 { 00:16:58.631 "name": "spare", 00:16:58.631 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:16:58.631 "is_configured": true, 00:16:58.631 "data_offset": 0, 00:16:58.631 "data_size": 65536 00:16:58.631 }, 00:16:58.631 { 00:16:58.631 "name": "BaseBdev2", 00:16:58.631 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:58.631 "is_configured": true, 00:16:58.631 "data_offset": 0, 00:16:58.631 "data_size": 65536 00:16:58.631 }, 00:16:58.631 { 00:16:58.631 "name": "BaseBdev3", 00:16:58.631 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:58.631 "is_configured": true, 00:16:58.631 "data_offset": 0, 00:16:58.631 "data_size": 65536 00:16:58.631 }, 00:16:58.631 { 00:16:58.631 "name": "BaseBdev4", 00:16:58.631 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:58.631 "is_configured": true, 00:16:58.631 "data_offset": 0, 00:16:58.631 "data_size": 65536 00:16:58.631 } 00:16:58.631 ] 00:16:58.631 }' 00:16:58.631 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.631 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.631 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.631 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:58.631 13:33:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.571 "name": "raid_bdev1", 00:16:59.571 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:16:59.571 "strip_size_kb": 64, 00:16:59.571 "state": "online", 00:16:59.571 "raid_level": "raid5f", 00:16:59.571 "superblock": false, 00:16:59.571 "num_base_bdevs": 4, 00:16:59.571 "num_base_bdevs_discovered": 4, 00:16:59.571 "num_base_bdevs_operational": 4, 00:16:59.571 "process": { 00:16:59.571 "type": "rebuild", 00:16:59.571 "target": "spare", 00:16:59.571 "progress": { 00:16:59.571 "blocks": 107520, 00:16:59.571 "percent": 54 00:16:59.571 } 00:16:59.571 }, 00:16:59.571 
"base_bdevs_list": [ 00:16:59.571 { 00:16:59.571 "name": "spare", 00:16:59.571 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:16:59.571 "is_configured": true, 00:16:59.571 "data_offset": 0, 00:16:59.571 "data_size": 65536 00:16:59.571 }, 00:16:59.571 { 00:16:59.571 "name": "BaseBdev2", 00:16:59.571 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:16:59.571 "is_configured": true, 00:16:59.571 "data_offset": 0, 00:16:59.571 "data_size": 65536 00:16:59.571 }, 00:16:59.571 { 00:16:59.571 "name": "BaseBdev3", 00:16:59.571 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:16:59.571 "is_configured": true, 00:16:59.571 "data_offset": 0, 00:16:59.571 "data_size": 65536 00:16:59.571 }, 00:16:59.571 { 00:16:59.571 "name": "BaseBdev4", 00:16:59.571 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:16:59.571 "is_configured": true, 00:16:59.571 "data_offset": 0, 00:16:59.571 "data_size": 65536 00:16:59.571 } 00:16:59.571 ] 00:16:59.571 }' 00:16:59.571 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.830 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.830 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.830 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.830 13:33:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.769 13:33:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.769 "name": "raid_bdev1", 00:17:00.769 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:17:00.769 "strip_size_kb": 64, 00:17:00.769 "state": "online", 00:17:00.769 "raid_level": "raid5f", 00:17:00.769 "superblock": false, 00:17:00.769 "num_base_bdevs": 4, 00:17:00.769 "num_base_bdevs_discovered": 4, 00:17:00.769 "num_base_bdevs_operational": 4, 00:17:00.769 "process": { 00:17:00.769 "type": "rebuild", 00:17:00.769 "target": "spare", 00:17:00.769 "progress": { 00:17:00.769 "blocks": 130560, 00:17:00.769 "percent": 66 00:17:00.769 } 00:17:00.769 }, 00:17:00.769 "base_bdevs_list": [ 00:17:00.769 { 00:17:00.769 "name": "spare", 00:17:00.769 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:17:00.769 "is_configured": true, 00:17:00.769 "data_offset": 0, 00:17:00.769 "data_size": 65536 00:17:00.769 }, 00:17:00.769 { 00:17:00.769 "name": "BaseBdev2", 00:17:00.769 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:17:00.769 "is_configured": true, 00:17:00.769 "data_offset": 0, 00:17:00.769 "data_size": 65536 00:17:00.769 }, 00:17:00.769 { 00:17:00.769 "name": "BaseBdev3", 00:17:00.769 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:17:00.769 
"is_configured": true, 00:17:00.769 "data_offset": 0, 00:17:00.769 "data_size": 65536 00:17:00.769 }, 00:17:00.769 { 00:17:00.769 "name": "BaseBdev4", 00:17:00.769 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:17:00.769 "is_configured": true, 00:17:00.769 "data_offset": 0, 00:17:00.769 "data_size": 65536 00:17:00.769 } 00:17:00.769 ] 00:17:00.769 }' 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.769 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.029 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.029 13:33:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.968 "name": "raid_bdev1", 00:17:01.968 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:17:01.968 "strip_size_kb": 64, 00:17:01.968 "state": "online", 00:17:01.968 "raid_level": "raid5f", 00:17:01.968 "superblock": false, 00:17:01.968 "num_base_bdevs": 4, 00:17:01.968 "num_base_bdevs_discovered": 4, 00:17:01.968 "num_base_bdevs_operational": 4, 00:17:01.968 "process": { 00:17:01.968 "type": "rebuild", 00:17:01.968 "target": "spare", 00:17:01.968 "progress": { 00:17:01.968 "blocks": 151680, 00:17:01.968 "percent": 77 00:17:01.968 } 00:17:01.968 }, 00:17:01.968 "base_bdevs_list": [ 00:17:01.968 { 00:17:01.968 "name": "spare", 00:17:01.968 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:17:01.968 "is_configured": true, 00:17:01.968 "data_offset": 0, 00:17:01.968 "data_size": 65536 00:17:01.968 }, 00:17:01.968 { 00:17:01.968 "name": "BaseBdev2", 00:17:01.968 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:17:01.968 "is_configured": true, 00:17:01.968 "data_offset": 0, 00:17:01.968 "data_size": 65536 00:17:01.968 }, 00:17:01.968 { 00:17:01.968 "name": "BaseBdev3", 00:17:01.968 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:17:01.968 "is_configured": true, 00:17:01.968 "data_offset": 0, 00:17:01.968 "data_size": 65536 00:17:01.968 }, 00:17:01.968 { 00:17:01.968 "name": "BaseBdev4", 00:17:01.968 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:17:01.968 "is_configured": true, 00:17:01.968 "data_offset": 0, 00:17:01.968 "data_size": 65536 00:17:01.968 } 00:17:01.968 ] 00:17:01.968 }' 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.968 13:33:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.968 13:33:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.968 13:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.968 13:33:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.348 "name": "raid_bdev1", 00:17:03.348 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:17:03.348 "strip_size_kb": 64, 00:17:03.348 "state": "online", 00:17:03.348 "raid_level": "raid5f", 00:17:03.348 "superblock": false, 00:17:03.348 "num_base_bdevs": 4, 00:17:03.348 "num_base_bdevs_discovered": 4, 00:17:03.348 "num_base_bdevs_operational": 4, 00:17:03.348 "process": { 00:17:03.348 
"type": "rebuild", 00:17:03.348 "target": "spare", 00:17:03.348 "progress": { 00:17:03.348 "blocks": 174720, 00:17:03.348 "percent": 88 00:17:03.348 } 00:17:03.348 }, 00:17:03.348 "base_bdevs_list": [ 00:17:03.348 { 00:17:03.348 "name": "spare", 00:17:03.348 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:17:03.348 "is_configured": true, 00:17:03.348 "data_offset": 0, 00:17:03.348 "data_size": 65536 00:17:03.348 }, 00:17:03.348 { 00:17:03.348 "name": "BaseBdev2", 00:17:03.348 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:17:03.348 "is_configured": true, 00:17:03.348 "data_offset": 0, 00:17:03.348 "data_size": 65536 00:17:03.348 }, 00:17:03.348 { 00:17:03.348 "name": "BaseBdev3", 00:17:03.348 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:17:03.348 "is_configured": true, 00:17:03.348 "data_offset": 0, 00:17:03.348 "data_size": 65536 00:17:03.348 }, 00:17:03.348 { 00:17:03.348 "name": "BaseBdev4", 00:17:03.348 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:17:03.348 "is_configured": true, 00:17:03.348 "data_offset": 0, 00:17:03.348 "data_size": 65536 00:17:03.348 } 00:17:03.348 ] 00:17:03.348 }' 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.348 13:33:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.284 [2024-11-18 13:33:34.205007] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:04.284 [2024-11-18 13:33:34.205087] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:04.284 [2024-11-18 13:33:34.205142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.284 "name": "raid_bdev1", 00:17:04.284 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:17:04.284 "strip_size_kb": 64, 00:17:04.284 "state": "online", 00:17:04.284 "raid_level": "raid5f", 00:17:04.284 "superblock": false, 00:17:04.284 "num_base_bdevs": 4, 00:17:04.284 "num_base_bdevs_discovered": 4, 00:17:04.284 "num_base_bdevs_operational": 4, 00:17:04.284 "process": { 00:17:04.284 "type": "rebuild", 00:17:04.284 "target": "spare", 00:17:04.284 "progress": { 00:17:04.284 "blocks": 195840, 00:17:04.284 "percent": 99 00:17:04.284 } 00:17:04.284 }, 00:17:04.284 "base_bdevs_list": [ 00:17:04.284 { 00:17:04.284 "name": 
"spare", 00:17:04.284 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:17:04.284 "is_configured": true, 00:17:04.284 "data_offset": 0, 00:17:04.284 "data_size": 65536 00:17:04.284 }, 00:17:04.284 { 00:17:04.284 "name": "BaseBdev2", 00:17:04.284 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:17:04.284 "is_configured": true, 00:17:04.284 "data_offset": 0, 00:17:04.284 "data_size": 65536 00:17:04.284 }, 00:17:04.284 { 00:17:04.284 "name": "BaseBdev3", 00:17:04.284 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:17:04.284 "is_configured": true, 00:17:04.284 "data_offset": 0, 00:17:04.284 "data_size": 65536 00:17:04.284 }, 00:17:04.284 { 00:17:04.284 "name": "BaseBdev4", 00:17:04.284 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:17:04.284 "is_configured": true, 00:17:04.284 "data_offset": 0, 00:17:04.284 "data_size": 65536 00:17:04.284 } 00:17:04.284 ] 00:17:04.284 }' 00:17:04.284 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.285 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.285 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.285 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.285 13:33:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.666 "name": "raid_bdev1", 00:17:05.666 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:17:05.666 "strip_size_kb": 64, 00:17:05.666 "state": "online", 00:17:05.666 "raid_level": "raid5f", 00:17:05.666 "superblock": false, 00:17:05.666 "num_base_bdevs": 4, 00:17:05.666 "num_base_bdevs_discovered": 4, 00:17:05.666 "num_base_bdevs_operational": 4, 00:17:05.666 "base_bdevs_list": [ 00:17:05.666 { 00:17:05.666 "name": "spare", 00:17:05.666 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:17:05.666 "is_configured": true, 00:17:05.666 "data_offset": 0, 00:17:05.666 "data_size": 65536 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "name": "BaseBdev2", 00:17:05.666 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:17:05.666 "is_configured": true, 00:17:05.666 "data_offset": 0, 00:17:05.666 "data_size": 65536 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "name": "BaseBdev3", 00:17:05.666 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:17:05.666 "is_configured": true, 00:17:05.666 "data_offset": 0, 00:17:05.666 "data_size": 65536 00:17:05.666 }, 00:17:05.666 { 00:17:05.666 "name": "BaseBdev4", 00:17:05.666 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:17:05.666 "is_configured": true, 00:17:05.666 "data_offset": 0, 00:17:05.666 
"data_size": 65536 00:17:05.666 } 00:17:05.666 ] 00:17:05.666 }' 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.666 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.667 "name": "raid_bdev1", 00:17:05.667 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:17:05.667 "strip_size_kb": 64, 00:17:05.667 "state": "online", 00:17:05.667 "raid_level": "raid5f", 
00:17:05.667 "superblock": false, 00:17:05.667 "num_base_bdevs": 4, 00:17:05.667 "num_base_bdevs_discovered": 4, 00:17:05.667 "num_base_bdevs_operational": 4, 00:17:05.667 "base_bdevs_list": [ 00:17:05.667 { 00:17:05.667 "name": "spare", 00:17:05.667 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:17:05.667 "is_configured": true, 00:17:05.667 "data_offset": 0, 00:17:05.667 "data_size": 65536 00:17:05.667 }, 00:17:05.667 { 00:17:05.667 "name": "BaseBdev2", 00:17:05.667 "uuid": "4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:17:05.667 "is_configured": true, 00:17:05.667 "data_offset": 0, 00:17:05.667 "data_size": 65536 00:17:05.667 }, 00:17:05.667 { 00:17:05.667 "name": "BaseBdev3", 00:17:05.667 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:17:05.667 "is_configured": true, 00:17:05.667 "data_offset": 0, 00:17:05.667 "data_size": 65536 00:17:05.667 }, 00:17:05.667 { 00:17:05.667 "name": "BaseBdev4", 00:17:05.667 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:17:05.667 "is_configured": true, 00:17:05.667 "data_offset": 0, 00:17:05.667 "data_size": 65536 00:17:05.667 } 00:17:05.667 ] 00:17:05.667 }' 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.667 "name": "raid_bdev1", 00:17:05.667 "uuid": "c6a0d167-020e-4baf-81be-3d226779f819", 00:17:05.667 "strip_size_kb": 64, 00:17:05.667 "state": "online", 00:17:05.667 "raid_level": "raid5f", 00:17:05.667 "superblock": false, 00:17:05.667 "num_base_bdevs": 4, 00:17:05.667 "num_base_bdevs_discovered": 4, 00:17:05.667 "num_base_bdevs_operational": 4, 00:17:05.667 "base_bdevs_list": [ 00:17:05.667 { 00:17:05.667 "name": "spare", 00:17:05.667 "uuid": "057899c3-b3ac-56c4-9e5b-9af9f1025a59", 00:17:05.667 "is_configured": true, 00:17:05.667 "data_offset": 0, 00:17:05.667 "data_size": 65536 00:17:05.667 }, 00:17:05.667 { 00:17:05.667 "name": "BaseBdev2", 00:17:05.667 "uuid": 
"4c307a2e-c41a-5361-a49e-d55657d1fd85", 00:17:05.667 "is_configured": true, 00:17:05.667 "data_offset": 0, 00:17:05.667 "data_size": 65536 00:17:05.667 }, 00:17:05.667 { 00:17:05.667 "name": "BaseBdev3", 00:17:05.667 "uuid": "5466a837-4f41-5e67-939a-a0d710da756a", 00:17:05.667 "is_configured": true, 00:17:05.667 "data_offset": 0, 00:17:05.667 "data_size": 65536 00:17:05.667 }, 00:17:05.667 { 00:17:05.667 "name": "BaseBdev4", 00:17:05.667 "uuid": "c2272b1d-f693-584b-a25f-031d7c5ad083", 00:17:05.667 "is_configured": true, 00:17:05.667 "data_offset": 0, 00:17:05.667 "data_size": 65536 00:17:05.667 } 00:17:05.667 ] 00:17:05.667 }' 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.667 13:33:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.237 [2024-11-18 13:33:36.083064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.237 [2024-11-18 13:33:36.083098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.237 [2024-11-18 13:33:36.083183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.237 [2024-11-18 13:33:36.083275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.237 [2024-11-18 13:33:36.083289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.237 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:06.497 /dev/nbd0 00:17:06.497 13:33:36 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:06.497 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:06.497 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:06.497 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:06.497 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.498 1+0 records in 00:17:06.498 1+0 records out 00:17:06.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426411 s, 9.6 MB/s 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.498 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:06.758 /dev/nbd1 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.758 1+0 records in 00:17:06.758 1+0 records out 00:17:06.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422133 s, 9.7 MB/s 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.758 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:07.018 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:07.018 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.018 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.018 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.018 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:07.018 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.018 13:33:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:07.018 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.018 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.018 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.018 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.018 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.019 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:17:07.019 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:07.019 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.019 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.019 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84538 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84538 ']' 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84538 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84538 00:17:07.279 killing process with pid 84538 00:17:07.279 Received shutdown signal, test time was about 60.000000 seconds 00:17:07.279 00:17:07.279 Latency(us) 00:17:07.279 [2024-11-18T13:33:37.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.279 [2024-11-18T13:33:37.333Z] =================================================================================================================== 00:17:07.279 [2024-11-18T13:33:37.333Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84538' 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84538 00:17:07.279 [2024-11-18 13:33:37.253201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:07.279 13:33:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84538 00:17:07.849 [2024-11-18 13:33:37.717017] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:08.789 00:17:08.789 real 0m19.940s 00:17:08.789 user 0m23.889s 00:17:08.789 sys 0m2.263s 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.789 ************************************ 00:17:08.789 END TEST raid5f_rebuild_test 00:17:08.789 ************************************ 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.789 13:33:38 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:08.789 13:33:38 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:08.789 13:33:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.789 13:33:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.789 ************************************ 00:17:08.789 START TEST raid5f_rebuild_test_sb 00:17:08.789 ************************************ 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:08.789 13:33:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:08.789 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85060 
00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85060 00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85060 ']' 00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.790 13:33:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.050 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:09.050 Zero copy mechanism will not be used. 00:17:09.050 [2024-11-18 13:33:38.930011] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:09.050 [2024-11-18 13:33:38.930138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85060 ] 00:17:09.314 [2024-11-18 13:33:39.110846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.314 [2024-11-18 13:33:39.211668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.576 [2024-11-18 13:33:39.391607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.576 [2024-11-18 13:33:39.391646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.836 BaseBdev1_malloc 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.836 [2024-11-18 13:33:39.763903] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:09.836 [2024-11-18 13:33:39.763966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.836 [2024-11-18 13:33:39.763991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:09.836 [2024-11-18 13:33:39.764002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.836 [2024-11-18 13:33:39.766001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.836 [2024-11-18 13:33:39.766037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:09.836 BaseBdev1 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.836 BaseBdev2_malloc 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.836 [2024-11-18 13:33:39.816447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:09.836 [2024-11-18 13:33:39.816501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:09.836 [2024-11-18 13:33:39.816521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:09.836 [2024-11-18 13:33:39.816533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.836 [2024-11-18 13:33:39.818426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.836 [2024-11-18 13:33:39.818511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:09.836 BaseBdev2 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.836 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 BaseBdev3_malloc 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 [2024-11-18 13:33:39.903450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:10.098 [2024-11-18 13:33:39.903498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.098 [2024-11-18 13:33:39.903519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:10.098 [2024-11-18 
13:33:39.903530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.098 [2024-11-18 13:33:39.905479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.098 [2024-11-18 13:33:39.905513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:10.098 BaseBdev3 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 BaseBdev4_malloc 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 [2024-11-18 13:33:39.955909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:10.098 [2024-11-18 13:33:39.955955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.098 [2024-11-18 13:33:39.955974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:10.098 [2024-11-18 13:33:39.955984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.098 [2024-11-18 13:33:39.957934] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:10.098 [2024-11-18 13:33:39.957974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:10.098 BaseBdev4 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.098 13:33:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 spare_malloc 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 spare_delay 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 [2024-11-18 13:33:40.020090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:10.098 [2024-11-18 13:33:40.020153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.098 [2024-11-18 13:33:40.020173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:10.098 [2024-11-18 13:33:40.020184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.098 [2024-11-18 13:33:40.022106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.098 [2024-11-18 13:33:40.022167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:10.098 spare 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 [2024-11-18 13:33:40.032123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.098 [2024-11-18 13:33:40.033846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.098 [2024-11-18 13:33:40.033924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.098 [2024-11-18 13:33:40.033972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:10.098 [2024-11-18 13:33:40.034167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:10.098 [2024-11-18 13:33:40.034190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:10.098 [2024-11-18 13:33:40.034407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:10.098 [2024-11-18 13:33:40.041622] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:10.098 [2024-11-18 13:33:40.041645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:10.098 [2024-11-18 13:33:40.041837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.098 13:33:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.098 "name": "raid_bdev1", 00:17:10.098 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:10.098 "strip_size_kb": 64, 00:17:10.098 "state": "online", 00:17:10.098 "raid_level": "raid5f", 00:17:10.098 "superblock": true, 00:17:10.098 "num_base_bdevs": 4, 00:17:10.098 "num_base_bdevs_discovered": 4, 00:17:10.098 "num_base_bdevs_operational": 4, 00:17:10.098 "base_bdevs_list": [ 00:17:10.098 { 00:17:10.098 "name": "BaseBdev1", 00:17:10.098 "uuid": "91a6909f-d0d5-5311-b600-ffdc3f4a13d5", 00:17:10.098 "is_configured": true, 00:17:10.098 "data_offset": 2048, 00:17:10.098 "data_size": 63488 00:17:10.098 }, 00:17:10.098 { 00:17:10.098 "name": "BaseBdev2", 00:17:10.098 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:10.098 "is_configured": true, 00:17:10.098 "data_offset": 2048, 00:17:10.098 "data_size": 63488 00:17:10.098 }, 00:17:10.098 { 00:17:10.098 "name": "BaseBdev3", 00:17:10.098 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:10.098 "is_configured": true, 00:17:10.098 "data_offset": 2048, 00:17:10.098 "data_size": 63488 00:17:10.098 }, 00:17:10.098 { 00:17:10.098 "name": "BaseBdev4", 00:17:10.098 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:10.098 "is_configured": true, 00:17:10.098 "data_offset": 2048, 00:17:10.098 "data_size": 63488 00:17:10.098 } 00:17:10.098 ] 00:17:10.098 }' 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.098 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.669 13:33:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.669 [2024-11-18 13:33:40.509094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:10.669 13:33:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.669 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:10.929 [2024-11-18 13:33:40.788446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:10.929 /dev/nbd0 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:10.929 1+0 records in 00:17:10.929 
1+0 records out 00:17:10.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00346963 s, 1.2 MB/s 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:10.929 13:33:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:11.499 496+0 records in 00:17:11.499 496+0 records out 00:17:11.499 97517568 bytes (98 MB, 93 MiB) copied, 0.439567 s, 222 MB/s 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.499 13:33:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:11.499 [2024-11-18 13:33:41.514244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:11.499 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.500 [2024-11-18 13:33:41.530992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:11.500 13:33:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.500 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.759 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.759 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.759 "name": "raid_bdev1", 00:17:11.759 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:11.759 "strip_size_kb": 64, 00:17:11.760 "state": "online", 00:17:11.760 "raid_level": "raid5f", 00:17:11.760 "superblock": true, 00:17:11.760 "num_base_bdevs": 4, 00:17:11.760 "num_base_bdevs_discovered": 3, 00:17:11.760 "num_base_bdevs_operational": 3, 00:17:11.760 
"base_bdevs_list": [ 00:17:11.760 { 00:17:11.760 "name": null, 00:17:11.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.760 "is_configured": false, 00:17:11.760 "data_offset": 0, 00:17:11.760 "data_size": 63488 00:17:11.760 }, 00:17:11.760 { 00:17:11.760 "name": "BaseBdev2", 00:17:11.760 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:11.760 "is_configured": true, 00:17:11.760 "data_offset": 2048, 00:17:11.760 "data_size": 63488 00:17:11.760 }, 00:17:11.760 { 00:17:11.760 "name": "BaseBdev3", 00:17:11.760 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:11.760 "is_configured": true, 00:17:11.760 "data_offset": 2048, 00:17:11.760 "data_size": 63488 00:17:11.760 }, 00:17:11.760 { 00:17:11.760 "name": "BaseBdev4", 00:17:11.760 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:11.760 "is_configured": true, 00:17:11.760 "data_offset": 2048, 00:17:11.760 "data_size": 63488 00:17:11.760 } 00:17:11.760 ] 00:17:11.760 }' 00:17:11.760 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.760 13:33:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.019 13:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:12.019 13:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.019 13:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.019 [2024-11-18 13:33:42.006155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.019 [2024-11-18 13:33:42.020768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:12.019 13:33:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.019 13:33:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:12.019 [2024-11-18 13:33:42.029775] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.401 "name": "raid_bdev1", 00:17:13.401 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:13.401 "strip_size_kb": 64, 00:17:13.401 "state": "online", 00:17:13.401 "raid_level": "raid5f", 00:17:13.401 "superblock": true, 00:17:13.401 "num_base_bdevs": 4, 00:17:13.401 "num_base_bdevs_discovered": 4, 00:17:13.401 "num_base_bdevs_operational": 4, 00:17:13.401 "process": { 00:17:13.401 "type": "rebuild", 00:17:13.401 "target": "spare", 00:17:13.401 "progress": { 00:17:13.401 "blocks": 19200, 00:17:13.401 "percent": 10 00:17:13.401 } 00:17:13.401 }, 00:17:13.401 "base_bdevs_list": [ 00:17:13.401 { 00:17:13.401 "name": "spare", 00:17:13.401 "uuid": 
"5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:13.401 "is_configured": true, 00:17:13.401 "data_offset": 2048, 00:17:13.401 "data_size": 63488 00:17:13.401 }, 00:17:13.401 { 00:17:13.401 "name": "BaseBdev2", 00:17:13.401 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:13.401 "is_configured": true, 00:17:13.401 "data_offset": 2048, 00:17:13.401 "data_size": 63488 00:17:13.401 }, 00:17:13.401 { 00:17:13.401 "name": "BaseBdev3", 00:17:13.401 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:13.401 "is_configured": true, 00:17:13.401 "data_offset": 2048, 00:17:13.401 "data_size": 63488 00:17:13.401 }, 00:17:13.401 { 00:17:13.401 "name": "BaseBdev4", 00:17:13.401 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:13.401 "is_configured": true, 00:17:13.401 "data_offset": 2048, 00:17:13.401 "data_size": 63488 00:17:13.401 } 00:17:13.401 ] 00:17:13.401 }' 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.401 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.401 [2024-11-18 13:33:43.160452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.402 [2024-11-18 13:33:43.235414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:13.402 [2024-11-18 13:33:43.235479] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.402 [2024-11-18 13:33:43.235495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.402 [2024-11-18 13:33:43.235504] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.402 "name": "raid_bdev1", 00:17:13.402 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:13.402 "strip_size_kb": 64, 00:17:13.402 "state": "online", 00:17:13.402 "raid_level": "raid5f", 00:17:13.402 "superblock": true, 00:17:13.402 "num_base_bdevs": 4, 00:17:13.402 "num_base_bdevs_discovered": 3, 00:17:13.402 "num_base_bdevs_operational": 3, 00:17:13.402 "base_bdevs_list": [ 00:17:13.402 { 00:17:13.402 "name": null, 00:17:13.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.402 "is_configured": false, 00:17:13.402 "data_offset": 0, 00:17:13.402 "data_size": 63488 00:17:13.402 }, 00:17:13.402 { 00:17:13.402 "name": "BaseBdev2", 00:17:13.402 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:13.402 "is_configured": true, 00:17:13.402 "data_offset": 2048, 00:17:13.402 "data_size": 63488 00:17:13.402 }, 00:17:13.402 { 00:17:13.402 "name": "BaseBdev3", 00:17:13.402 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:13.402 "is_configured": true, 00:17:13.402 "data_offset": 2048, 00:17:13.402 "data_size": 63488 00:17:13.402 }, 00:17:13.402 { 00:17:13.402 "name": "BaseBdev4", 00:17:13.402 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:13.402 "is_configured": true, 00:17:13.402 "data_offset": 2048, 00:17:13.402 "data_size": 63488 00:17:13.402 } 00:17:13.402 ] 00:17:13.402 }' 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.402 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.662 
13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.662 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.921 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.921 "name": "raid_bdev1", 00:17:13.921 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:13.921 "strip_size_kb": 64, 00:17:13.921 "state": "online", 00:17:13.922 "raid_level": "raid5f", 00:17:13.922 "superblock": true, 00:17:13.922 "num_base_bdevs": 4, 00:17:13.922 "num_base_bdevs_discovered": 3, 00:17:13.922 "num_base_bdevs_operational": 3, 00:17:13.922 "base_bdevs_list": [ 00:17:13.922 { 00:17:13.922 "name": null, 00:17:13.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.922 "is_configured": false, 00:17:13.922 "data_offset": 0, 00:17:13.922 "data_size": 63488 00:17:13.922 }, 00:17:13.922 { 00:17:13.922 "name": "BaseBdev2", 00:17:13.922 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:13.922 "is_configured": true, 00:17:13.922 "data_offset": 2048, 00:17:13.922 "data_size": 63488 00:17:13.922 }, 00:17:13.922 { 00:17:13.922 "name": "BaseBdev3", 00:17:13.922 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:13.922 "is_configured": true, 00:17:13.922 "data_offset": 2048, 00:17:13.922 
"data_size": 63488 00:17:13.922 }, 00:17:13.922 { 00:17:13.922 "name": "BaseBdev4", 00:17:13.922 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:13.922 "is_configured": true, 00:17:13.922 "data_offset": 2048, 00:17:13.922 "data_size": 63488 00:17:13.922 } 00:17:13.922 ] 00:17:13.922 }' 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.922 [2024-11-18 13:33:43.843049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.922 [2024-11-18 13:33:43.857400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.922 13:33:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:13.922 [2024-11-18 13:33:43.865759] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.908 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.908 "name": "raid_bdev1", 00:17:14.908 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:14.908 "strip_size_kb": 64, 00:17:14.908 "state": "online", 00:17:14.908 "raid_level": "raid5f", 00:17:14.908 "superblock": true, 00:17:14.908 "num_base_bdevs": 4, 00:17:14.908 "num_base_bdevs_discovered": 4, 00:17:14.908 "num_base_bdevs_operational": 4, 00:17:14.908 "process": { 00:17:14.908 "type": "rebuild", 00:17:14.908 "target": "spare", 00:17:14.908 "progress": { 00:17:14.908 "blocks": 19200, 00:17:14.908 "percent": 10 00:17:14.908 } 00:17:14.908 }, 00:17:14.908 "base_bdevs_list": [ 00:17:14.908 { 00:17:14.908 "name": "spare", 00:17:14.908 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:14.908 "is_configured": true, 00:17:14.908 "data_offset": 2048, 00:17:14.908 "data_size": 63488 00:17:14.908 }, 00:17:14.908 { 00:17:14.909 "name": "BaseBdev2", 00:17:14.909 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:14.909 "is_configured": true, 00:17:14.909 "data_offset": 2048, 00:17:14.909 "data_size": 63488 00:17:14.909 }, 00:17:14.909 { 
00:17:14.909 "name": "BaseBdev3", 00:17:14.909 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:14.909 "is_configured": true, 00:17:14.909 "data_offset": 2048, 00:17:14.909 "data_size": 63488 00:17:14.909 }, 00:17:14.909 { 00:17:14.909 "name": "BaseBdev4", 00:17:14.909 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:14.909 "is_configured": true, 00:17:14.909 "data_offset": 2048, 00:17:14.909 "data_size": 63488 00:17:14.909 } 00:17:14.909 ] 00:17:14.909 }' 00:17:14.909 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.178 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.178 13:33:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:15.178 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=639 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.178 "name": "raid_bdev1", 00:17:15.178 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:15.178 "strip_size_kb": 64, 00:17:15.178 "state": "online", 00:17:15.178 "raid_level": "raid5f", 00:17:15.178 "superblock": true, 00:17:15.178 "num_base_bdevs": 4, 00:17:15.178 "num_base_bdevs_discovered": 4, 00:17:15.178 "num_base_bdevs_operational": 4, 00:17:15.178 "process": { 00:17:15.178 "type": "rebuild", 00:17:15.178 "target": "spare", 00:17:15.178 "progress": { 00:17:15.178 "blocks": 21120, 00:17:15.178 "percent": 11 00:17:15.178 } 00:17:15.178 }, 00:17:15.178 "base_bdevs_list": [ 00:17:15.178 { 00:17:15.178 "name": "spare", 00:17:15.178 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:15.178 "is_configured": true, 00:17:15.178 "data_offset": 2048, 00:17:15.178 "data_size": 63488 00:17:15.178 }, 00:17:15.178 { 00:17:15.178 "name": "BaseBdev2", 00:17:15.178 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:15.178 "is_configured": true, 00:17:15.178 "data_offset": 2048, 00:17:15.178 "data_size": 63488 00:17:15.178 }, 00:17:15.178 { 
00:17:15.178 "name": "BaseBdev3", 00:17:15.178 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:15.178 "is_configured": true, 00:17:15.178 "data_offset": 2048, 00:17:15.178 "data_size": 63488 00:17:15.178 }, 00:17:15.178 { 00:17:15.178 "name": "BaseBdev4", 00:17:15.178 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:15.178 "is_configured": true, 00:17:15.178 "data_offset": 2048, 00:17:15.178 "data_size": 63488 00:17:15.178 } 00:17:15.178 ] 00:17:15.178 }' 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.178 13:33:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.117 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.377 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.377 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.377 "name": "raid_bdev1", 00:17:16.377 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:16.377 "strip_size_kb": 64, 00:17:16.377 "state": "online", 00:17:16.377 "raid_level": "raid5f", 00:17:16.377 "superblock": true, 00:17:16.377 "num_base_bdevs": 4, 00:17:16.377 "num_base_bdevs_discovered": 4, 00:17:16.377 "num_base_bdevs_operational": 4, 00:17:16.377 "process": { 00:17:16.377 "type": "rebuild", 00:17:16.377 "target": "spare", 00:17:16.377 "progress": { 00:17:16.377 "blocks": 42240, 00:17:16.377 "percent": 22 00:17:16.377 } 00:17:16.377 }, 00:17:16.377 "base_bdevs_list": [ 00:17:16.377 { 00:17:16.377 "name": "spare", 00:17:16.377 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:16.377 "is_configured": true, 00:17:16.377 "data_offset": 2048, 00:17:16.377 "data_size": 63488 00:17:16.377 }, 00:17:16.377 { 00:17:16.377 "name": "BaseBdev2", 00:17:16.377 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:16.377 "is_configured": true, 00:17:16.377 "data_offset": 2048, 00:17:16.377 "data_size": 63488 00:17:16.377 }, 00:17:16.377 { 00:17:16.377 "name": "BaseBdev3", 00:17:16.377 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:16.377 "is_configured": true, 00:17:16.377 "data_offset": 2048, 00:17:16.377 "data_size": 63488 00:17:16.377 }, 00:17:16.377 { 00:17:16.377 "name": "BaseBdev4", 00:17:16.377 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:16.377 "is_configured": true, 00:17:16.377 "data_offset": 2048, 00:17:16.377 "data_size": 63488 00:17:16.377 } 00:17:16.377 ] 00:17:16.377 }' 00:17:16.377 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:16.377 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.377 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.377 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.377 13:33:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.317 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.317 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.318 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.318 "name": "raid_bdev1", 00:17:17.318 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:17.318 "strip_size_kb": 64, 00:17:17.318 "state": "online", 00:17:17.318 
"raid_level": "raid5f", 00:17:17.318 "superblock": true, 00:17:17.318 "num_base_bdevs": 4, 00:17:17.318 "num_base_bdevs_discovered": 4, 00:17:17.318 "num_base_bdevs_operational": 4, 00:17:17.318 "process": { 00:17:17.318 "type": "rebuild", 00:17:17.318 "target": "spare", 00:17:17.318 "progress": { 00:17:17.318 "blocks": 65280, 00:17:17.318 "percent": 34 00:17:17.318 } 00:17:17.318 }, 00:17:17.318 "base_bdevs_list": [ 00:17:17.318 { 00:17:17.318 "name": "spare", 00:17:17.318 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:17.318 "is_configured": true, 00:17:17.318 "data_offset": 2048, 00:17:17.318 "data_size": 63488 00:17:17.318 }, 00:17:17.318 { 00:17:17.318 "name": "BaseBdev2", 00:17:17.318 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:17.318 "is_configured": true, 00:17:17.318 "data_offset": 2048, 00:17:17.318 "data_size": 63488 00:17:17.318 }, 00:17:17.318 { 00:17:17.318 "name": "BaseBdev3", 00:17:17.318 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:17.318 "is_configured": true, 00:17:17.318 "data_offset": 2048, 00:17:17.318 "data_size": 63488 00:17:17.318 }, 00:17:17.318 { 00:17:17.318 "name": "BaseBdev4", 00:17:17.318 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:17.318 "is_configured": true, 00:17:17.318 "data_offset": 2048, 00:17:17.318 "data_size": 63488 00:17:17.318 } 00:17:17.318 ] 00:17:17.318 }' 00:17:17.577 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.577 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.577 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.577 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.577 13:33:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.516 "name": "raid_bdev1", 00:17:18.516 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:18.516 "strip_size_kb": 64, 00:17:18.516 "state": "online", 00:17:18.516 "raid_level": "raid5f", 00:17:18.516 "superblock": true, 00:17:18.516 "num_base_bdevs": 4, 00:17:18.516 "num_base_bdevs_discovered": 4, 00:17:18.516 "num_base_bdevs_operational": 4, 00:17:18.516 "process": { 00:17:18.516 "type": "rebuild", 00:17:18.516 "target": "spare", 00:17:18.516 "progress": { 00:17:18.516 "blocks": 86400, 00:17:18.516 "percent": 45 00:17:18.516 } 00:17:18.516 }, 00:17:18.516 "base_bdevs_list": [ 00:17:18.516 { 00:17:18.516 "name": "spare", 00:17:18.516 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:18.516 "is_configured": true, 
00:17:18.516 "data_offset": 2048, 00:17:18.516 "data_size": 63488 00:17:18.516 }, 00:17:18.516 { 00:17:18.516 "name": "BaseBdev2", 00:17:18.516 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:18.516 "is_configured": true, 00:17:18.516 "data_offset": 2048, 00:17:18.516 "data_size": 63488 00:17:18.516 }, 00:17:18.516 { 00:17:18.516 "name": "BaseBdev3", 00:17:18.516 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:18.516 "is_configured": true, 00:17:18.516 "data_offset": 2048, 00:17:18.516 "data_size": 63488 00:17:18.516 }, 00:17:18.516 { 00:17:18.516 "name": "BaseBdev4", 00:17:18.516 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:18.516 "is_configured": true, 00:17:18.516 "data_offset": 2048, 00:17:18.516 "data_size": 63488 00:17:18.516 } 00:17:18.516 ] 00:17:18.516 }' 00:17:18.516 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.776 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.776 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.776 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.776 13:33:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.715 "name": "raid_bdev1", 00:17:19.715 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:19.715 "strip_size_kb": 64, 00:17:19.715 "state": "online", 00:17:19.715 "raid_level": "raid5f", 00:17:19.715 "superblock": true, 00:17:19.715 "num_base_bdevs": 4, 00:17:19.715 "num_base_bdevs_discovered": 4, 00:17:19.715 "num_base_bdevs_operational": 4, 00:17:19.715 "process": { 00:17:19.715 "type": "rebuild", 00:17:19.715 "target": "spare", 00:17:19.715 "progress": { 00:17:19.715 "blocks": 109440, 00:17:19.715 "percent": 57 00:17:19.715 } 00:17:19.715 }, 00:17:19.715 "base_bdevs_list": [ 00:17:19.715 { 00:17:19.715 "name": "spare", 00:17:19.715 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:19.715 "is_configured": true, 00:17:19.715 "data_offset": 2048, 00:17:19.715 "data_size": 63488 00:17:19.715 }, 00:17:19.715 { 00:17:19.715 "name": "BaseBdev2", 00:17:19.715 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:19.715 "is_configured": true, 00:17:19.715 "data_offset": 2048, 00:17:19.715 "data_size": 63488 00:17:19.715 }, 00:17:19.715 { 00:17:19.715 "name": "BaseBdev3", 00:17:19.715 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:19.715 "is_configured": true, 00:17:19.715 "data_offset": 2048, 00:17:19.715 "data_size": 63488 00:17:19.715 }, 00:17:19.715 
{ 00:17:19.715 "name": "BaseBdev4", 00:17:19.715 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:19.715 "is_configured": true, 00:17:19.715 "data_offset": 2048, 00:17:19.715 "data_size": 63488 00:17:19.715 } 00:17:19.715 ] 00:17:19.715 }' 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.715 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.975 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.975 13:33:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.914 "name": "raid_bdev1", 00:17:20.914 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:20.914 "strip_size_kb": 64, 00:17:20.914 "state": "online", 00:17:20.914 "raid_level": "raid5f", 00:17:20.914 "superblock": true, 00:17:20.914 "num_base_bdevs": 4, 00:17:20.914 "num_base_bdevs_discovered": 4, 00:17:20.914 "num_base_bdevs_operational": 4, 00:17:20.914 "process": { 00:17:20.914 "type": "rebuild", 00:17:20.914 "target": "spare", 00:17:20.914 "progress": { 00:17:20.914 "blocks": 130560, 00:17:20.914 "percent": 68 00:17:20.914 } 00:17:20.914 }, 00:17:20.914 "base_bdevs_list": [ 00:17:20.914 { 00:17:20.914 "name": "spare", 00:17:20.914 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:20.914 "is_configured": true, 00:17:20.914 "data_offset": 2048, 00:17:20.914 "data_size": 63488 00:17:20.914 }, 00:17:20.914 { 00:17:20.914 "name": "BaseBdev2", 00:17:20.914 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:20.914 "is_configured": true, 00:17:20.914 "data_offset": 2048, 00:17:20.914 "data_size": 63488 00:17:20.914 }, 00:17:20.914 { 00:17:20.914 "name": "BaseBdev3", 00:17:20.914 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:20.914 "is_configured": true, 00:17:20.914 "data_offset": 2048, 00:17:20.914 "data_size": 63488 00:17:20.914 }, 00:17:20.914 { 00:17:20.914 "name": "BaseBdev4", 00:17:20.914 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:20.914 "is_configured": true, 00:17:20.914 "data_offset": 2048, 00:17:20.914 "data_size": 63488 00:17:20.914 } 00:17:20.914 ] 00:17:20.914 }' 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.914 13:33:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.296 "name": "raid_bdev1", 00:17:22.296 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:22.296 "strip_size_kb": 64, 00:17:22.296 "state": "online", 00:17:22.296 "raid_level": "raid5f", 00:17:22.296 "superblock": true, 00:17:22.296 "num_base_bdevs": 4, 00:17:22.296 "num_base_bdevs_discovered": 4, 00:17:22.296 "num_base_bdevs_operational": 4, 00:17:22.296 "process": { 00:17:22.296 "type": 
"rebuild", 00:17:22.296 "target": "spare", 00:17:22.296 "progress": { 00:17:22.296 "blocks": 153600, 00:17:22.296 "percent": 80 00:17:22.296 } 00:17:22.296 }, 00:17:22.296 "base_bdevs_list": [ 00:17:22.296 { 00:17:22.296 "name": "spare", 00:17:22.296 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:22.296 "is_configured": true, 00:17:22.296 "data_offset": 2048, 00:17:22.296 "data_size": 63488 00:17:22.296 }, 00:17:22.296 { 00:17:22.296 "name": "BaseBdev2", 00:17:22.296 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:22.296 "is_configured": true, 00:17:22.296 "data_offset": 2048, 00:17:22.296 "data_size": 63488 00:17:22.296 }, 00:17:22.296 { 00:17:22.296 "name": "BaseBdev3", 00:17:22.296 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:22.296 "is_configured": true, 00:17:22.296 "data_offset": 2048, 00:17:22.296 "data_size": 63488 00:17:22.296 }, 00:17:22.296 { 00:17:22.296 "name": "BaseBdev4", 00:17:22.296 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:22.296 "is_configured": true, 00:17:22.296 "data_offset": 2048, 00:17:22.296 "data_size": 63488 00:17:22.296 } 00:17:22.296 ] 00:17:22.296 }' 00:17:22.296 13:33:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.296 13:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.296 13:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.296 13:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.296 13:33:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.237 "name": "raid_bdev1", 00:17:23.237 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:23.237 "strip_size_kb": 64, 00:17:23.237 "state": "online", 00:17:23.237 "raid_level": "raid5f", 00:17:23.237 "superblock": true, 00:17:23.237 "num_base_bdevs": 4, 00:17:23.237 "num_base_bdevs_discovered": 4, 00:17:23.237 "num_base_bdevs_operational": 4, 00:17:23.237 "process": { 00:17:23.237 "type": "rebuild", 00:17:23.237 "target": "spare", 00:17:23.237 "progress": { 00:17:23.237 "blocks": 174720, 00:17:23.237 "percent": 91 00:17:23.237 } 00:17:23.237 }, 00:17:23.237 "base_bdevs_list": [ 00:17:23.237 { 00:17:23.237 "name": "spare", 00:17:23.237 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:23.237 "is_configured": true, 00:17:23.237 "data_offset": 2048, 00:17:23.237 "data_size": 63488 00:17:23.237 }, 00:17:23.237 { 00:17:23.237 "name": "BaseBdev2", 00:17:23.237 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:23.237 
"is_configured": true, 00:17:23.237 "data_offset": 2048, 00:17:23.237 "data_size": 63488 00:17:23.237 }, 00:17:23.237 { 00:17:23.237 "name": "BaseBdev3", 00:17:23.237 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:23.237 "is_configured": true, 00:17:23.237 "data_offset": 2048, 00:17:23.237 "data_size": 63488 00:17:23.237 }, 00:17:23.237 { 00:17:23.237 "name": "BaseBdev4", 00:17:23.237 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:23.237 "is_configured": true, 00:17:23.237 "data_offset": 2048, 00:17:23.237 "data_size": 63488 00:17:23.237 } 00:17:23.237 ] 00:17:23.237 }' 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.237 13:33:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.177 [2024-11-18 13:33:53.910373] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:24.177 [2024-11-18 13:33:53.910438] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:24.177 [2024-11-18 13:33:53.910546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.438 "name": "raid_bdev1", 00:17:24.438 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:24.438 "strip_size_kb": 64, 00:17:24.438 "state": "online", 00:17:24.438 "raid_level": "raid5f", 00:17:24.438 "superblock": true, 00:17:24.438 "num_base_bdevs": 4, 00:17:24.438 "num_base_bdevs_discovered": 4, 00:17:24.438 "num_base_bdevs_operational": 4, 00:17:24.438 "base_bdevs_list": [ 00:17:24.438 { 00:17:24.438 "name": "spare", 00:17:24.438 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:24.438 "is_configured": true, 00:17:24.438 "data_offset": 2048, 00:17:24.438 "data_size": 63488 00:17:24.438 }, 00:17:24.438 { 00:17:24.438 "name": "BaseBdev2", 00:17:24.438 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:24.438 "is_configured": true, 00:17:24.438 "data_offset": 2048, 00:17:24.438 "data_size": 63488 00:17:24.438 }, 00:17:24.438 { 00:17:24.438 "name": "BaseBdev3", 00:17:24.438 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:24.438 "is_configured": true, 00:17:24.438 "data_offset": 2048, 00:17:24.438 "data_size": 63488 00:17:24.438 }, 00:17:24.438 { 00:17:24.438 "name": 
"BaseBdev4", 00:17:24.438 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:24.438 "is_configured": true, 00:17:24.438 "data_offset": 2048, 00:17:24.438 "data_size": 63488 00:17:24.438 } 00:17:24.438 ] 00:17:24.438 }' 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:24.438 "name": "raid_bdev1", 00:17:24.438 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:24.438 "strip_size_kb": 64, 00:17:24.438 "state": "online", 00:17:24.438 "raid_level": "raid5f", 00:17:24.438 "superblock": true, 00:17:24.438 "num_base_bdevs": 4, 00:17:24.438 "num_base_bdevs_discovered": 4, 00:17:24.438 "num_base_bdevs_operational": 4, 00:17:24.438 "base_bdevs_list": [ 00:17:24.438 { 00:17:24.438 "name": "spare", 00:17:24.438 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:24.438 "is_configured": true, 00:17:24.438 "data_offset": 2048, 00:17:24.438 "data_size": 63488 00:17:24.438 }, 00:17:24.438 { 00:17:24.438 "name": "BaseBdev2", 00:17:24.438 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:24.438 "is_configured": true, 00:17:24.438 "data_offset": 2048, 00:17:24.438 "data_size": 63488 00:17:24.438 }, 00:17:24.438 { 00:17:24.438 "name": "BaseBdev3", 00:17:24.438 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:24.438 "is_configured": true, 00:17:24.438 "data_offset": 2048, 00:17:24.438 "data_size": 63488 00:17:24.438 }, 00:17:24.438 { 00:17:24.438 "name": "BaseBdev4", 00:17:24.438 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:24.438 "is_configured": true, 00:17:24.438 "data_offset": 2048, 00:17:24.438 "data_size": 63488 00:17:24.438 } 00:17:24.438 ] 00:17:24.438 }' 00:17:24.438 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.699 "name": "raid_bdev1", 00:17:24.699 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:24.699 "strip_size_kb": 64, 00:17:24.699 "state": "online", 00:17:24.699 "raid_level": "raid5f", 00:17:24.699 "superblock": true, 00:17:24.699 "num_base_bdevs": 4, 00:17:24.699 "num_base_bdevs_discovered": 4, 00:17:24.699 "num_base_bdevs_operational": 4, 00:17:24.699 "base_bdevs_list": [ 00:17:24.699 { 
00:17:24.699 "name": "spare", 00:17:24.699 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:24.699 "is_configured": true, 00:17:24.699 "data_offset": 2048, 00:17:24.699 "data_size": 63488 00:17:24.699 }, 00:17:24.699 { 00:17:24.699 "name": "BaseBdev2", 00:17:24.699 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:24.699 "is_configured": true, 00:17:24.699 "data_offset": 2048, 00:17:24.699 "data_size": 63488 00:17:24.699 }, 00:17:24.699 { 00:17:24.699 "name": "BaseBdev3", 00:17:24.699 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:24.699 "is_configured": true, 00:17:24.699 "data_offset": 2048, 00:17:24.699 "data_size": 63488 00:17:24.699 }, 00:17:24.699 { 00:17:24.699 "name": "BaseBdev4", 00:17:24.699 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:24.699 "is_configured": true, 00:17:24.699 "data_offset": 2048, 00:17:24.699 "data_size": 63488 00:17:24.699 } 00:17:24.699 ] 00:17:24.699 }' 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.699 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.959 [2024-11-18 13:33:54.929226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.959 [2024-11-18 13:33:54.929302] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.959 [2024-11-18 13:33:54.929381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.959 [2024-11-18 13:33:54.929471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.959 [2024-11-18 
13:33:54.929491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:24.959 13:33:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:24.959 13:33:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:25.220 /dev/nbd0 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.220 1+0 records in 00:17:25.220 1+0 records out 00:17:25.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316147 s, 13.0 MB/s 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:25.220 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:25.479 /dev/nbd1 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.479 1+0 records in 00:17:25.479 
1+0 records out 00:17:25.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411172 s, 10.0 MB/s 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:25.479 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:25.739 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:25.739 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.739 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:25.739 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.739 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:25.739 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.739 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.999 
13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.999 13:33:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.259 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.260 [2024-11-18 13:33:56.121518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:26.260 [2024-11-18 13:33:56.121575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.260 [2024-11-18 13:33:56.121602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:26.260 [2024-11-18 13:33:56.121612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.260 [2024-11-18 13:33:56.123770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.260 [2024-11-18 13:33:56.123805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:26.260 [2024-11-18 13:33:56.123894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:26.260 [2024-11-18 13:33:56.123944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.260 [2024-11-18 13:33:56.124079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.260 [2024-11-18 13:33:56.124184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.260 [2024-11-18 13:33:56.124264] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:26.260 spare 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.260 [2024-11-18 13:33:56.224163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:26.260 [2024-11-18 13:33:56.224195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:26.260 [2024-11-18 13:33:56.224447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:26.260 [2024-11-18 13:33:56.231323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:26.260 [2024-11-18 13:33:56.231345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:26.260 [2024-11-18 13:33:56.231526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.260 "name": "raid_bdev1", 00:17:26.260 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:26.260 "strip_size_kb": 64, 00:17:26.260 "state": "online", 00:17:26.260 "raid_level": "raid5f", 00:17:26.260 "superblock": true, 00:17:26.260 "num_base_bdevs": 4, 00:17:26.260 "num_base_bdevs_discovered": 4, 00:17:26.260 "num_base_bdevs_operational": 4, 00:17:26.260 "base_bdevs_list": [ 00:17:26.260 { 00:17:26.260 "name": "spare", 00:17:26.260 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:26.260 "is_configured": true, 00:17:26.260 "data_offset": 2048, 00:17:26.260 "data_size": 63488 00:17:26.260 }, 00:17:26.260 { 00:17:26.260 "name": "BaseBdev2", 00:17:26.260 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:26.260 "is_configured": true, 00:17:26.260 "data_offset": 
2048, 00:17:26.260 "data_size": 63488 00:17:26.260 }, 00:17:26.260 { 00:17:26.260 "name": "BaseBdev3", 00:17:26.260 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:26.260 "is_configured": true, 00:17:26.260 "data_offset": 2048, 00:17:26.260 "data_size": 63488 00:17:26.260 }, 00:17:26.260 { 00:17:26.260 "name": "BaseBdev4", 00:17:26.260 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:26.260 "is_configured": true, 00:17:26.260 "data_offset": 2048, 00:17:26.260 "data_size": 63488 00:17:26.260 } 00:17:26.260 ] 00:17:26.260 }' 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.260 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.830 "name": 
"raid_bdev1", 00:17:26.830 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:26.830 "strip_size_kb": 64, 00:17:26.830 "state": "online", 00:17:26.830 "raid_level": "raid5f", 00:17:26.830 "superblock": true, 00:17:26.830 "num_base_bdevs": 4, 00:17:26.830 "num_base_bdevs_discovered": 4, 00:17:26.830 "num_base_bdevs_operational": 4, 00:17:26.830 "base_bdevs_list": [ 00:17:26.830 { 00:17:26.830 "name": "spare", 00:17:26.830 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:26.830 "is_configured": true, 00:17:26.830 "data_offset": 2048, 00:17:26.830 "data_size": 63488 00:17:26.830 }, 00:17:26.830 { 00:17:26.830 "name": "BaseBdev2", 00:17:26.830 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:26.830 "is_configured": true, 00:17:26.830 "data_offset": 2048, 00:17:26.830 "data_size": 63488 00:17:26.830 }, 00:17:26.830 { 00:17:26.830 "name": "BaseBdev3", 00:17:26.830 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:26.830 "is_configured": true, 00:17:26.830 "data_offset": 2048, 00:17:26.830 "data_size": 63488 00:17:26.830 }, 00:17:26.830 { 00:17:26.830 "name": "BaseBdev4", 00:17:26.830 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:26.830 "is_configured": true, 00:17:26.830 "data_offset": 2048, 00:17:26.830 "data_size": 63488 00:17:26.830 } 00:17:26.830 ] 00:17:26.830 }' 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.830 
13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:26.830 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.091 [2024-11-18 13:33:56.898305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.091 "name": "raid_bdev1", 00:17:27.091 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:27.091 "strip_size_kb": 64, 00:17:27.091 "state": "online", 00:17:27.091 "raid_level": "raid5f", 00:17:27.091 "superblock": true, 00:17:27.091 "num_base_bdevs": 4, 00:17:27.091 "num_base_bdevs_discovered": 3, 00:17:27.091 "num_base_bdevs_operational": 3, 00:17:27.091 "base_bdevs_list": [ 00:17:27.091 { 00:17:27.091 "name": null, 00:17:27.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.091 "is_configured": false, 00:17:27.091 "data_offset": 0, 00:17:27.091 "data_size": 63488 00:17:27.091 }, 00:17:27.091 { 00:17:27.091 "name": "BaseBdev2", 00:17:27.091 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:27.091 "is_configured": true, 00:17:27.091 "data_offset": 2048, 00:17:27.091 "data_size": 63488 00:17:27.091 }, 00:17:27.091 { 00:17:27.091 "name": "BaseBdev3", 00:17:27.091 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:27.091 "is_configured": true, 00:17:27.091 "data_offset": 2048, 00:17:27.091 "data_size": 63488 00:17:27.091 }, 00:17:27.091 { 00:17:27.091 "name": "BaseBdev4", 00:17:27.091 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:27.091 "is_configured": true, 00:17:27.091 "data_offset": 
2048, 00:17:27.091 "data_size": 63488 00:17:27.091 } 00:17:27.091 ] 00:17:27.091 }' 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.091 13:33:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.351 13:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.351 13:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.351 13:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.351 [2024-11-18 13:33:57.333622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.351 [2024-11-18 13:33:57.333788] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:27.351 [2024-11-18 13:33:57.333808] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:27.351 [2024-11-18 13:33:57.333838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.351 [2024-11-18 13:33:57.347895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:27.351 13:33:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.351 13:33:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:27.351 [2024-11-18 13:33:57.356151] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.734 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.734 "name": "raid_bdev1", 00:17:28.734 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:28.734 "strip_size_kb": 64, 00:17:28.734 "state": "online", 00:17:28.734 
"raid_level": "raid5f", 00:17:28.734 "superblock": true, 00:17:28.734 "num_base_bdevs": 4, 00:17:28.734 "num_base_bdevs_discovered": 4, 00:17:28.734 "num_base_bdevs_operational": 4, 00:17:28.734 "process": { 00:17:28.734 "type": "rebuild", 00:17:28.734 "target": "spare", 00:17:28.734 "progress": { 00:17:28.734 "blocks": 19200, 00:17:28.735 "percent": 10 00:17:28.735 } 00:17:28.735 }, 00:17:28.735 "base_bdevs_list": [ 00:17:28.735 { 00:17:28.735 "name": "spare", 00:17:28.735 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:28.735 "is_configured": true, 00:17:28.735 "data_offset": 2048, 00:17:28.735 "data_size": 63488 00:17:28.735 }, 00:17:28.735 { 00:17:28.735 "name": "BaseBdev2", 00:17:28.735 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:28.735 "is_configured": true, 00:17:28.735 "data_offset": 2048, 00:17:28.735 "data_size": 63488 00:17:28.735 }, 00:17:28.735 { 00:17:28.735 "name": "BaseBdev3", 00:17:28.735 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:28.735 "is_configured": true, 00:17:28.735 "data_offset": 2048, 00:17:28.735 "data_size": 63488 00:17:28.735 }, 00:17:28.735 { 00:17:28.735 "name": "BaseBdev4", 00:17:28.735 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:28.735 "is_configured": true, 00:17:28.735 "data_offset": 2048, 00:17:28.735 "data_size": 63488 00:17:28.735 } 00:17:28.735 ] 00:17:28.735 }' 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.735 [2024-11-18 13:33:58.515028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.735 [2024-11-18 13:33:58.561748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.735 [2024-11-18 13:33:58.561806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.735 [2024-11-18 13:33:58.561822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.735 [2024-11-18 13:33:58.561830] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.735 "name": "raid_bdev1", 00:17:28.735 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:28.735 "strip_size_kb": 64, 00:17:28.735 "state": "online", 00:17:28.735 "raid_level": "raid5f", 00:17:28.735 "superblock": true, 00:17:28.735 "num_base_bdevs": 4, 00:17:28.735 "num_base_bdevs_discovered": 3, 00:17:28.735 "num_base_bdevs_operational": 3, 00:17:28.735 "base_bdevs_list": [ 00:17:28.735 { 00:17:28.735 "name": null, 00:17:28.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.735 "is_configured": false, 00:17:28.735 "data_offset": 0, 00:17:28.735 "data_size": 63488 00:17:28.735 }, 00:17:28.735 { 00:17:28.735 "name": "BaseBdev2", 00:17:28.735 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:28.735 "is_configured": true, 00:17:28.735 "data_offset": 2048, 00:17:28.735 "data_size": 63488 00:17:28.735 }, 00:17:28.735 { 00:17:28.735 "name": "BaseBdev3", 00:17:28.735 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:28.735 "is_configured": true, 00:17:28.735 "data_offset": 2048, 00:17:28.735 "data_size": 63488 00:17:28.735 }, 00:17:28.735 { 00:17:28.735 "name": "BaseBdev4", 00:17:28.735 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:28.735 "is_configured": true, 00:17:28.735 "data_offset": 2048, 00:17:28.735 "data_size": 63488 00:17:28.735 } 00:17:28.735 ] 00:17:28.735 
}' 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.735 13:33:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.312 13:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:29.313 13:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.313 13:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.313 [2024-11-18 13:33:59.074691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:29.313 [2024-11-18 13:33:59.074749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.313 [2024-11-18 13:33:59.074778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:29.313 [2024-11-18 13:33:59.074790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.313 [2024-11-18 13:33:59.075275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.313 [2024-11-18 13:33:59.075298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:29.313 [2024-11-18 13:33:59.075388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:29.313 [2024-11-18 13:33:59.075404] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:29.313 [2024-11-18 13:33:59.075413] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:29.313 [2024-11-18 13:33:59.075439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.313 [2024-11-18 13:33:59.088762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:29.313 spare 00:17:29.313 13:33:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.313 13:33:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:29.313 [2024-11-18 13:33:59.097351] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.258 "name": "raid_bdev1", 00:17:30.258 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:30.258 "strip_size_kb": 64, 00:17:30.258 "state": 
"online", 00:17:30.258 "raid_level": "raid5f", 00:17:30.258 "superblock": true, 00:17:30.258 "num_base_bdevs": 4, 00:17:30.258 "num_base_bdevs_discovered": 4, 00:17:30.258 "num_base_bdevs_operational": 4, 00:17:30.258 "process": { 00:17:30.258 "type": "rebuild", 00:17:30.258 "target": "spare", 00:17:30.258 "progress": { 00:17:30.258 "blocks": 19200, 00:17:30.258 "percent": 10 00:17:30.258 } 00:17:30.258 }, 00:17:30.258 "base_bdevs_list": [ 00:17:30.258 { 00:17:30.258 "name": "spare", 00:17:30.258 "uuid": "5828f753-f9a9-535c-8cda-e2c371dc63f8", 00:17:30.258 "is_configured": true, 00:17:30.258 "data_offset": 2048, 00:17:30.258 "data_size": 63488 00:17:30.258 }, 00:17:30.258 { 00:17:30.258 "name": "BaseBdev2", 00:17:30.258 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:30.258 "is_configured": true, 00:17:30.258 "data_offset": 2048, 00:17:30.258 "data_size": 63488 00:17:30.258 }, 00:17:30.258 { 00:17:30.258 "name": "BaseBdev3", 00:17:30.258 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:30.258 "is_configured": true, 00:17:30.258 "data_offset": 2048, 00:17:30.258 "data_size": 63488 00:17:30.258 }, 00:17:30.258 { 00:17:30.258 "name": "BaseBdev4", 00:17:30.258 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:30.258 "is_configured": true, 00:17:30.258 "data_offset": 2048, 00:17:30.258 "data_size": 63488 00:17:30.258 } 00:17:30.258 ] 00:17:30.258 }' 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:30.258 13:34:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.258 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.258 [2024-11-18 13:34:00.228099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.258 [2024-11-18 13:34:00.302966] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:30.258 [2024-11-18 13:34:00.303013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.258 [2024-11-18 13:34:00.303031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.258 [2024-11-18 13:34:00.303038] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:30.518 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.519 13:34:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.519 "name": "raid_bdev1", 00:17:30.519 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:30.519 "strip_size_kb": 64, 00:17:30.519 "state": "online", 00:17:30.519 "raid_level": "raid5f", 00:17:30.519 "superblock": true, 00:17:30.519 "num_base_bdevs": 4, 00:17:30.519 "num_base_bdevs_discovered": 3, 00:17:30.519 "num_base_bdevs_operational": 3, 00:17:30.519 "base_bdevs_list": [ 00:17:30.519 { 00:17:30.519 "name": null, 00:17:30.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.519 "is_configured": false, 00:17:30.519 "data_offset": 0, 00:17:30.519 "data_size": 63488 00:17:30.519 }, 00:17:30.519 { 00:17:30.519 "name": "BaseBdev2", 00:17:30.519 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:30.519 "is_configured": true, 00:17:30.519 "data_offset": 2048, 00:17:30.519 "data_size": 63488 00:17:30.519 }, 00:17:30.519 { 00:17:30.519 "name": "BaseBdev3", 00:17:30.519 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:30.519 "is_configured": true, 00:17:30.519 "data_offset": 2048, 00:17:30.519 "data_size": 63488 00:17:30.519 }, 00:17:30.519 { 00:17:30.519 "name": "BaseBdev4", 00:17:30.519 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:30.519 "is_configured": true, 00:17:30.519 "data_offset": 2048, 00:17:30.519 
"data_size": 63488 00:17:30.519 } 00:17:30.519 ] 00:17:30.519 }' 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.519 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.779 "name": "raid_bdev1", 00:17:30.779 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:30.779 "strip_size_kb": 64, 00:17:30.779 "state": "online", 00:17:30.779 "raid_level": "raid5f", 00:17:30.779 "superblock": true, 00:17:30.779 "num_base_bdevs": 4, 00:17:30.779 "num_base_bdevs_discovered": 3, 00:17:30.779 "num_base_bdevs_operational": 3, 00:17:30.779 "base_bdevs_list": [ 00:17:30.779 { 00:17:30.779 "name": null, 00:17:30.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.779 
"is_configured": false, 00:17:30.779 "data_offset": 0, 00:17:30.779 "data_size": 63488 00:17:30.779 }, 00:17:30.779 { 00:17:30.779 "name": "BaseBdev2", 00:17:30.779 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:30.779 "is_configured": true, 00:17:30.779 "data_offset": 2048, 00:17:30.779 "data_size": 63488 00:17:30.779 }, 00:17:30.779 { 00:17:30.779 "name": "BaseBdev3", 00:17:30.779 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:30.779 "is_configured": true, 00:17:30.779 "data_offset": 2048, 00:17:30.779 "data_size": 63488 00:17:30.779 }, 00:17:30.779 { 00:17:30.779 "name": "BaseBdev4", 00:17:30.779 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:30.779 "is_configured": true, 00:17:30.779 "data_offset": 2048, 00:17:30.779 "data_size": 63488 00:17:30.779 } 00:17:30.779 ] 00:17:30.779 }' 00:17:30.779 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.039 13:34:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.039 [2024-11-18 13:34:00.908124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:31.039 [2024-11-18 13:34:00.908183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.039 [2024-11-18 13:34:00.908218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:31.039 [2024-11-18 13:34:00.908228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.039 [2024-11-18 13:34:00.908669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.039 [2024-11-18 13:34:00.908686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:31.039 [2024-11-18 13:34:00.908760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:31.039 [2024-11-18 13:34:00.908775] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:31.039 [2024-11-18 13:34:00.908789] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:31.039 [2024-11-18 13:34:00.908799] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:31.039 BaseBdev1 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.039 13:34:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.978 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.979 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.979 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.979 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.979 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.979 "name": "raid_bdev1", 00:17:31.979 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:31.979 "strip_size_kb": 64, 00:17:31.979 "state": "online", 00:17:31.979 "raid_level": "raid5f", 00:17:31.979 "superblock": true, 00:17:31.979 "num_base_bdevs": 4, 00:17:31.979 "num_base_bdevs_discovered": 3, 00:17:31.979 "num_base_bdevs_operational": 3, 00:17:31.979 "base_bdevs_list": [ 00:17:31.979 { 00:17:31.979 "name": null, 00:17:31.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.979 "is_configured": false, 00:17:31.979 
"data_offset": 0, 00:17:31.979 "data_size": 63488 00:17:31.979 }, 00:17:31.979 { 00:17:31.979 "name": "BaseBdev2", 00:17:31.979 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:31.979 "is_configured": true, 00:17:31.979 "data_offset": 2048, 00:17:31.979 "data_size": 63488 00:17:31.979 }, 00:17:31.979 { 00:17:31.979 "name": "BaseBdev3", 00:17:31.979 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:31.979 "is_configured": true, 00:17:31.979 "data_offset": 2048, 00:17:31.979 "data_size": 63488 00:17:31.979 }, 00:17:31.979 { 00:17:31.979 "name": "BaseBdev4", 00:17:31.979 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:31.979 "is_configured": true, 00:17:31.979 "data_offset": 2048, 00:17:31.979 "data_size": 63488 00:17:31.979 } 00:17:31.979 ] 00:17:31.979 }' 00:17:31.979 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.979 13:34:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.548 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.548 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.548 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.548 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.549 "name": "raid_bdev1", 00:17:32.549 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:32.549 "strip_size_kb": 64, 00:17:32.549 "state": "online", 00:17:32.549 "raid_level": "raid5f", 00:17:32.549 "superblock": true, 00:17:32.549 "num_base_bdevs": 4, 00:17:32.549 "num_base_bdevs_discovered": 3, 00:17:32.549 "num_base_bdevs_operational": 3, 00:17:32.549 "base_bdevs_list": [ 00:17:32.549 { 00:17:32.549 "name": null, 00:17:32.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.549 "is_configured": false, 00:17:32.549 "data_offset": 0, 00:17:32.549 "data_size": 63488 00:17:32.549 }, 00:17:32.549 { 00:17:32.549 "name": "BaseBdev2", 00:17:32.549 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:32.549 "is_configured": true, 00:17:32.549 "data_offset": 2048, 00:17:32.549 "data_size": 63488 00:17:32.549 }, 00:17:32.549 { 00:17:32.549 "name": "BaseBdev3", 00:17:32.549 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:32.549 "is_configured": true, 00:17:32.549 "data_offset": 2048, 00:17:32.549 "data_size": 63488 00:17:32.549 }, 00:17:32.549 { 00:17:32.549 "name": "BaseBdev4", 00:17:32.549 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:32.549 "is_configured": true, 00:17:32.549 "data_offset": 2048, 00:17:32.549 "data_size": 63488 00:17:32.549 } 00:17:32.549 ] 00:17:32.549 }' 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.549 
13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.549 [2024-11-18 13:34:02.517444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.549 [2024-11-18 13:34:02.517592] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:32.549 [2024-11-18 13:34:02.517609] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:32.549 request: 00:17:32.549 { 00:17:32.549 "base_bdev": "BaseBdev1", 00:17:32.549 "raid_bdev": "raid_bdev1", 00:17:32.549 "method": "bdev_raid_add_base_bdev", 00:17:32.549 "req_id": 1 00:17:32.549 } 00:17:32.549 Got JSON-RPC error response 00:17:32.549 response: 00:17:32.549 { 00:17:32.549 "code": -22, 00:17:32.549 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:32.549 } 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:32.549 13:34:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.489 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.749 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.749 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.749 "name": "raid_bdev1", 00:17:33.749 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:33.749 "strip_size_kb": 64, 00:17:33.749 "state": "online", 00:17:33.749 "raid_level": "raid5f", 00:17:33.749 "superblock": true, 00:17:33.749 "num_base_bdevs": 4, 00:17:33.749 "num_base_bdevs_discovered": 3, 00:17:33.749 "num_base_bdevs_operational": 3, 00:17:33.749 "base_bdevs_list": [ 00:17:33.749 { 00:17:33.749 "name": null, 00:17:33.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.749 "is_configured": false, 00:17:33.749 "data_offset": 0, 00:17:33.749 "data_size": 63488 00:17:33.749 }, 00:17:33.749 { 00:17:33.749 "name": "BaseBdev2", 00:17:33.749 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:33.749 "is_configured": true, 00:17:33.749 "data_offset": 2048, 00:17:33.749 "data_size": 63488 00:17:33.749 }, 00:17:33.749 { 00:17:33.749 "name": "BaseBdev3", 00:17:33.749 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:33.749 "is_configured": true, 00:17:33.749 "data_offset": 2048, 00:17:33.749 "data_size": 63488 00:17:33.749 }, 00:17:33.749 { 00:17:33.749 "name": "BaseBdev4", 00:17:33.749 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:33.749 "is_configured": true, 00:17:33.749 "data_offset": 2048, 00:17:33.750 "data_size": 63488 00:17:33.750 } 00:17:33.750 ] 00:17:33.750 }' 00:17:33.750 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.750 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:34.009 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.009 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.009 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.009 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.009 13:34:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.009 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.009 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.009 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.009 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.009 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.009 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.009 "name": "raid_bdev1", 00:17:34.009 "uuid": "07f207d1-2be6-4ddd-a144-791620501743", 00:17:34.009 "strip_size_kb": 64, 00:17:34.009 "state": "online", 00:17:34.009 "raid_level": "raid5f", 00:17:34.009 "superblock": true, 00:17:34.009 "num_base_bdevs": 4, 00:17:34.009 "num_base_bdevs_discovered": 3, 00:17:34.009 "num_base_bdevs_operational": 3, 00:17:34.009 "base_bdevs_list": [ 00:17:34.009 { 00:17:34.009 "name": null, 00:17:34.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.010 "is_configured": false, 00:17:34.010 "data_offset": 0, 00:17:34.010 "data_size": 63488 00:17:34.010 }, 00:17:34.010 { 00:17:34.010 "name": "BaseBdev2", 00:17:34.010 "uuid": "c7bdd53e-65d1-508f-b0b9-7f5483b9ffb2", 00:17:34.010 "is_configured": true, 
00:17:34.010 "data_offset": 2048, 00:17:34.010 "data_size": 63488 00:17:34.010 }, 00:17:34.010 { 00:17:34.010 "name": "BaseBdev3", 00:17:34.010 "uuid": "e380bb95-993d-5992-82b0-c0cc3c7e32f5", 00:17:34.010 "is_configured": true, 00:17:34.010 "data_offset": 2048, 00:17:34.010 "data_size": 63488 00:17:34.010 }, 00:17:34.010 { 00:17:34.010 "name": "BaseBdev4", 00:17:34.010 "uuid": "73928b17-d85a-5d36-9887-9920edfde71b", 00:17:34.010 "is_configured": true, 00:17:34.010 "data_offset": 2048, 00:17:34.010 "data_size": 63488 00:17:34.010 } 00:17:34.010 ] 00:17:34.010 }' 00:17:34.010 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85060 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85060 ']' 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85060 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85060 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.270 killing process with pid 85060 00:17:34.270 13:34:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85060' 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85060 00:17:34.270 Received shutdown signal, test time was about 60.000000 seconds 00:17:34.270 00:17:34.270 Latency(us) 00:17:34.270 [2024-11-18T13:34:04.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.270 [2024-11-18T13:34:04.324Z] =================================================================================================================== 00:17:34.270 [2024-11-18T13:34:04.324Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.270 [2024-11-18 13:34:04.172584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.270 13:34:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85060 00:17:34.270 [2024-11-18 13:34:04.172729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.270 [2024-11-18 13:34:04.172808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.270 [2024-11-18 13:34:04.172820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:34.841 [2024-11-18 13:34:04.637516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.785 13:34:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:35.785 ************************************ 00:17:35.785 END TEST raid5f_rebuild_test_sb 00:17:35.785 ************************************ 00:17:35.785 00:17:35.785 real 0m26.841s 00:17:35.785 user 0m33.727s 00:17:35.785 sys 0m3.034s 00:17:35.785 13:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.785 13:34:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 13:34:05 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:35.785 13:34:05 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:35.785 13:34:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:35.785 13:34:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.785 13:34:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.785 ************************************ 00:17:35.785 START TEST raid_state_function_test_sb_4k 00:17:35.785 ************************************ 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:35.785 13:34:05 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:35.785 Process raid pid: 85870 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85870 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85870' 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85870 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85870 ']' 00:17:35.785 13:34:05 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.785 13:34:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.079 [2024-11-18 13:34:05.836851] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:36.079 [2024-11-18 13:34:05.837052] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.079 [2024-11-18 13:34:06.014549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.354 [2024-11-18 13:34:06.124425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.354 [2024-11-18 13:34:06.320563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.354 [2024-11-18 13:34:06.320599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.614 [2024-11-18 13:34:06.657223] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:36.614 [2024-11-18 13:34:06.657269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:36.614 [2024-11-18 13:34:06.657279] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.614 [2024-11-18 13:34:06.657288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.614 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.614 
13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.874 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.874 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.874 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.874 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.874 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.874 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.874 "name": "Existed_Raid", 00:17:36.874 "uuid": "d2472152-5c20-4547-8aad-51e123a4691d", 00:17:36.874 "strip_size_kb": 0, 00:17:36.874 "state": "configuring", 00:17:36.874 "raid_level": "raid1", 00:17:36.874 "superblock": true, 00:17:36.874 "num_base_bdevs": 2, 00:17:36.874 "num_base_bdevs_discovered": 0, 00:17:36.874 "num_base_bdevs_operational": 2, 00:17:36.874 "base_bdevs_list": [ 00:17:36.874 { 00:17:36.874 "name": "BaseBdev1", 00:17:36.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.874 "is_configured": false, 00:17:36.874 "data_offset": 0, 00:17:36.874 "data_size": 0 00:17:36.874 }, 00:17:36.874 { 00:17:36.874 "name": "BaseBdev2", 00:17:36.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.875 "is_configured": false, 00:17:36.875 "data_offset": 0, 00:17:36.875 "data_size": 0 00:17:36.875 } 00:17:36.875 ] 00:17:36.875 }' 00:17:36.875 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.875 13:34:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.135 [2024-11-18 13:34:07.108361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.135 [2024-11-18 13:34:07.108439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.135 [2024-11-18 13:34:07.120350] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.135 [2024-11-18 13:34:07.120422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.135 [2024-11-18 13:34:07.120450] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.135 [2024-11-18 13:34:07.120473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.135 13:34:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.135 [2024-11-18 13:34:07.167838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.135 BaseBdev1 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.135 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.395 [ 00:17:37.395 { 00:17:37.395 "name": "BaseBdev1", 00:17:37.395 "aliases": [ 00:17:37.395 
"8d0b9af2-aa1a-48b1-9a12-5111497dd8aa" 00:17:37.395 ], 00:17:37.395 "product_name": "Malloc disk", 00:17:37.395 "block_size": 4096, 00:17:37.395 "num_blocks": 8192, 00:17:37.395 "uuid": "8d0b9af2-aa1a-48b1-9a12-5111497dd8aa", 00:17:37.395 "assigned_rate_limits": { 00:17:37.395 "rw_ios_per_sec": 0, 00:17:37.395 "rw_mbytes_per_sec": 0, 00:17:37.395 "r_mbytes_per_sec": 0, 00:17:37.395 "w_mbytes_per_sec": 0 00:17:37.395 }, 00:17:37.395 "claimed": true, 00:17:37.395 "claim_type": "exclusive_write", 00:17:37.395 "zoned": false, 00:17:37.395 "supported_io_types": { 00:17:37.395 "read": true, 00:17:37.395 "write": true, 00:17:37.395 "unmap": true, 00:17:37.395 "flush": true, 00:17:37.395 "reset": true, 00:17:37.395 "nvme_admin": false, 00:17:37.395 "nvme_io": false, 00:17:37.395 "nvme_io_md": false, 00:17:37.395 "write_zeroes": true, 00:17:37.395 "zcopy": true, 00:17:37.395 "get_zone_info": false, 00:17:37.395 "zone_management": false, 00:17:37.395 "zone_append": false, 00:17:37.395 "compare": false, 00:17:37.395 "compare_and_write": false, 00:17:37.395 "abort": true, 00:17:37.395 "seek_hole": false, 00:17:37.395 "seek_data": false, 00:17:37.395 "copy": true, 00:17:37.395 "nvme_iov_md": false 00:17:37.395 }, 00:17:37.395 "memory_domains": [ 00:17:37.395 { 00:17:37.395 "dma_device_id": "system", 00:17:37.395 "dma_device_type": 1 00:17:37.395 }, 00:17:37.395 { 00:17:37.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.395 "dma_device_type": 2 00:17:37.395 } 00:17:37.395 ], 00:17:37.395 "driver_specific": {} 00:17:37.395 } 00:17:37.395 ] 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.395 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.395 "name": "Existed_Raid", 00:17:37.395 "uuid": "1cc3423b-b1d4-4b8f-8ceb-25bbc1d8be1d", 00:17:37.395 "strip_size_kb": 0, 00:17:37.395 "state": "configuring", 00:17:37.395 "raid_level": "raid1", 00:17:37.395 "superblock": true, 00:17:37.395 "num_base_bdevs": 2, 00:17:37.396 
"num_base_bdevs_discovered": 1, 00:17:37.396 "num_base_bdevs_operational": 2, 00:17:37.396 "base_bdevs_list": [ 00:17:37.396 { 00:17:37.396 "name": "BaseBdev1", 00:17:37.396 "uuid": "8d0b9af2-aa1a-48b1-9a12-5111497dd8aa", 00:17:37.396 "is_configured": true, 00:17:37.396 "data_offset": 256, 00:17:37.396 "data_size": 7936 00:17:37.396 }, 00:17:37.396 { 00:17:37.396 "name": "BaseBdev2", 00:17:37.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.396 "is_configured": false, 00:17:37.396 "data_offset": 0, 00:17:37.396 "data_size": 0 00:17:37.396 } 00:17:37.396 ] 00:17:37.396 }' 00:17:37.396 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.396 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.655 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:37.655 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.655 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.655 [2024-11-18 13:34:07.639031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.655 [2024-11-18 13:34:07.639070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:37.655 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.655 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:37.655 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.656 [2024-11-18 13:34:07.651094] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:37.656 [2024-11-18 13:34:07.652840] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.656 [2024-11-18 13:34:07.652911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.656 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.915 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.915 "name": "Existed_Raid", 00:17:37.915 "uuid": "86afaf6c-0ea7-4a90-97a8-3ff3446c8182", 00:17:37.915 "strip_size_kb": 0, 00:17:37.915 "state": "configuring", 00:17:37.915 "raid_level": "raid1", 00:17:37.915 "superblock": true, 00:17:37.915 "num_base_bdevs": 2, 00:17:37.915 "num_base_bdevs_discovered": 1, 00:17:37.915 "num_base_bdevs_operational": 2, 00:17:37.915 "base_bdevs_list": [ 00:17:37.915 { 00:17:37.915 "name": "BaseBdev1", 00:17:37.915 "uuid": "8d0b9af2-aa1a-48b1-9a12-5111497dd8aa", 00:17:37.916 "is_configured": true, 00:17:37.916 "data_offset": 256, 00:17:37.916 "data_size": 7936 00:17:37.916 }, 00:17:37.916 { 00:17:37.916 "name": "BaseBdev2", 00:17:37.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.916 "is_configured": false, 00:17:37.916 "data_offset": 0, 00:17:37.916 "data_size": 0 00:17:37.916 } 00:17:37.916 ] 00:17:37.916 }' 00:17:37.916 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.916 13:34:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.176 13:34:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 [2024-11-18 13:34:08.124102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.176 [2024-11-18 13:34:08.124374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:38.176 [2024-11-18 13:34:08.124389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:38.176 [2024-11-18 13:34:08.124631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:38.176 [2024-11-18 13:34:08.124782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:38.176 [2024-11-18 13:34:08.124794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:38.176 [2024-11-18 13:34:08.124924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.176 BaseBdev2 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:38.176 13:34:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 [ 00:17:38.176 { 00:17:38.176 "name": "BaseBdev2", 00:17:38.176 "aliases": [ 00:17:38.176 "0153aa5f-85b8-4393-989f-b1ce17e10a9a" 00:17:38.176 ], 00:17:38.176 "product_name": "Malloc disk", 00:17:38.176 "block_size": 4096, 00:17:38.176 "num_blocks": 8192, 00:17:38.176 "uuid": "0153aa5f-85b8-4393-989f-b1ce17e10a9a", 00:17:38.176 "assigned_rate_limits": { 00:17:38.176 "rw_ios_per_sec": 0, 00:17:38.176 "rw_mbytes_per_sec": 0, 00:17:38.176 "r_mbytes_per_sec": 0, 00:17:38.176 "w_mbytes_per_sec": 0 00:17:38.176 }, 00:17:38.176 "claimed": true, 00:17:38.176 "claim_type": "exclusive_write", 00:17:38.176 "zoned": false, 00:17:38.176 "supported_io_types": { 00:17:38.176 "read": true, 00:17:38.176 "write": true, 00:17:38.176 "unmap": true, 00:17:38.176 "flush": true, 00:17:38.176 "reset": true, 00:17:38.176 "nvme_admin": false, 00:17:38.176 "nvme_io": false, 00:17:38.176 "nvme_io_md": false, 00:17:38.176 "write_zeroes": true, 00:17:38.176 "zcopy": true, 00:17:38.176 "get_zone_info": false, 00:17:38.176 "zone_management": false, 00:17:38.176 "zone_append": false, 00:17:38.176 "compare": false, 00:17:38.176 "compare_and_write": false, 00:17:38.176 "abort": true, 00:17:38.176 "seek_hole": false, 00:17:38.176 "seek_data": false, 00:17:38.176 "copy": true, 00:17:38.176 "nvme_iov_md": false 
00:17:38.176 }, 00:17:38.176 "memory_domains": [ 00:17:38.176 { 00:17:38.176 "dma_device_id": "system", 00:17:38.176 "dma_device_type": 1 00:17:38.176 }, 00:17:38.176 { 00:17:38.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.176 "dma_device_type": 2 00:17:38.176 } 00:17:38.176 ], 00:17:38.176 "driver_specific": {} 00:17:38.176 } 00:17:38.176 ] 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.176 "name": "Existed_Raid", 00:17:38.176 "uuid": "86afaf6c-0ea7-4a90-97a8-3ff3446c8182", 00:17:38.176 "strip_size_kb": 0, 00:17:38.176 "state": "online", 00:17:38.176 "raid_level": "raid1", 00:17:38.176 "superblock": true, 00:17:38.176 "num_base_bdevs": 2, 00:17:38.176 "num_base_bdevs_discovered": 2, 00:17:38.176 "num_base_bdevs_operational": 2, 00:17:38.176 "base_bdevs_list": [ 00:17:38.176 { 00:17:38.176 "name": "BaseBdev1", 00:17:38.176 "uuid": "8d0b9af2-aa1a-48b1-9a12-5111497dd8aa", 00:17:38.176 "is_configured": true, 00:17:38.176 "data_offset": 256, 00:17:38.176 "data_size": 7936 00:17:38.176 }, 00:17:38.176 { 00:17:38.176 "name": "BaseBdev2", 00:17:38.176 "uuid": "0153aa5f-85b8-4393-989f-b1ce17e10a9a", 00:17:38.176 "is_configured": true, 00:17:38.176 "data_offset": 256, 00:17:38.176 "data_size": 7936 00:17:38.176 } 00:17:38.176 ] 00:17:38.176 }' 00:17:38.176 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.177 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:38.747 13:34:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 [2024-11-18 13:34:08.587564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:38.747 "name": "Existed_Raid", 00:17:38.747 "aliases": [ 00:17:38.747 "86afaf6c-0ea7-4a90-97a8-3ff3446c8182" 00:17:38.747 ], 00:17:38.747 "product_name": "Raid Volume", 00:17:38.747 "block_size": 4096, 00:17:38.747 "num_blocks": 7936, 00:17:38.747 "uuid": "86afaf6c-0ea7-4a90-97a8-3ff3446c8182", 00:17:38.747 "assigned_rate_limits": { 00:17:38.747 "rw_ios_per_sec": 0, 00:17:38.747 "rw_mbytes_per_sec": 0, 00:17:38.747 "r_mbytes_per_sec": 0, 00:17:38.747 "w_mbytes_per_sec": 0 00:17:38.747 }, 00:17:38.747 "claimed": false, 00:17:38.747 "zoned": false, 00:17:38.747 "supported_io_types": { 00:17:38.747 "read": true, 
00:17:38.747 "write": true, 00:17:38.747 "unmap": false, 00:17:38.747 "flush": false, 00:17:38.747 "reset": true, 00:17:38.747 "nvme_admin": false, 00:17:38.747 "nvme_io": false, 00:17:38.747 "nvme_io_md": false, 00:17:38.747 "write_zeroes": true, 00:17:38.747 "zcopy": false, 00:17:38.747 "get_zone_info": false, 00:17:38.747 "zone_management": false, 00:17:38.747 "zone_append": false, 00:17:38.747 "compare": false, 00:17:38.747 "compare_and_write": false, 00:17:38.747 "abort": false, 00:17:38.747 "seek_hole": false, 00:17:38.747 "seek_data": false, 00:17:38.747 "copy": false, 00:17:38.747 "nvme_iov_md": false 00:17:38.747 }, 00:17:38.747 "memory_domains": [ 00:17:38.747 { 00:17:38.747 "dma_device_id": "system", 00:17:38.747 "dma_device_type": 1 00:17:38.747 }, 00:17:38.747 { 00:17:38.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.747 "dma_device_type": 2 00:17:38.747 }, 00:17:38.747 { 00:17:38.747 "dma_device_id": "system", 00:17:38.747 "dma_device_type": 1 00:17:38.747 }, 00:17:38.747 { 00:17:38.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.747 "dma_device_type": 2 00:17:38.747 } 00:17:38.747 ], 00:17:38.747 "driver_specific": { 00:17:38.747 "raid": { 00:17:38.747 "uuid": "86afaf6c-0ea7-4a90-97a8-3ff3446c8182", 00:17:38.747 "strip_size_kb": 0, 00:17:38.747 "state": "online", 00:17:38.747 "raid_level": "raid1", 00:17:38.747 "superblock": true, 00:17:38.747 "num_base_bdevs": 2, 00:17:38.747 "num_base_bdevs_discovered": 2, 00:17:38.747 "num_base_bdevs_operational": 2, 00:17:38.747 "base_bdevs_list": [ 00:17:38.747 { 00:17:38.747 "name": "BaseBdev1", 00:17:38.747 "uuid": "8d0b9af2-aa1a-48b1-9a12-5111497dd8aa", 00:17:38.747 "is_configured": true, 00:17:38.747 "data_offset": 256, 00:17:38.747 "data_size": 7936 00:17:38.747 }, 00:17:38.747 { 00:17:38.747 "name": "BaseBdev2", 00:17:38.747 "uuid": "0153aa5f-85b8-4393-989f-b1ce17e10a9a", 00:17:38.747 "is_configured": true, 00:17:38.747 "data_offset": 256, 00:17:38.747 "data_size": 7936 00:17:38.747 } 
00:17:38.747 ] 00:17:38.747 } 00:17:38.747 } 00:17:38.747 }' 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:38.747 BaseBdev2' 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:38.747 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:38.748 13:34:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.748 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.009 [2024-11-18 13:34:08.823024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:39.009 13:34:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.009 "name": "Existed_Raid", 00:17:39.009 "uuid": "86afaf6c-0ea7-4a90-97a8-3ff3446c8182", 00:17:39.009 "strip_size_kb": 0, 00:17:39.009 "state": "online", 00:17:39.009 "raid_level": "raid1", 00:17:39.009 "superblock": true, 00:17:39.009 
"num_base_bdevs": 2, 00:17:39.009 "num_base_bdevs_discovered": 1, 00:17:39.009 "num_base_bdevs_operational": 1, 00:17:39.009 "base_bdevs_list": [ 00:17:39.009 { 00:17:39.009 "name": null, 00:17:39.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.009 "is_configured": false, 00:17:39.009 "data_offset": 0, 00:17:39.009 "data_size": 7936 00:17:39.009 }, 00:17:39.009 { 00:17:39.009 "name": "BaseBdev2", 00:17:39.009 "uuid": "0153aa5f-85b8-4393-989f-b1ce17e10a9a", 00:17:39.009 "is_configured": true, 00:17:39.009 "data_offset": 256, 00:17:39.009 "data_size": 7936 00:17:39.009 } 00:17:39.009 ] 00:17:39.009 }' 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.009 13:34:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.587 [2024-11-18 13:34:09.449746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.587 [2024-11-18 13:34:09.449846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.587 [2024-11-18 13:34:09.541374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.587 [2024-11-18 13:34:09.541508] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.587 [2024-11-18 13:34:09.541526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:39.587 13:34:09 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85870 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85870 ']' 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85870 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85870 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.587 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85870' 00:17:39.847 killing process with pid 85870 00:17:39.847 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85870 00:17:39.847 [2024-11-18 13:34:09.639912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.847 13:34:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85870 00:17:39.847 [2024-11-18 13:34:09.656822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.787 13:34:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:40.787 00:17:40.787 real 0m4.960s 00:17:40.787 user 0m7.190s 00:17:40.787 sys 0m0.862s 00:17:40.787 
************************************ 00:17:40.787 END TEST raid_state_function_test_sb_4k 00:17:40.787 ************************************ 00:17:40.787 13:34:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.787 13:34:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.787 13:34:10 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:40.787 13:34:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:40.787 13:34:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.787 13:34:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.787 ************************************ 00:17:40.787 START TEST raid_superblock_test_4k 00:17:40.787 ************************************ 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:40.787 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86120 00:17:40.788 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:40.788 13:34:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86120 00:17:40.788 13:34:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86120 ']' 00:17:40.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.788 13:34:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.788 13:34:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.788 13:34:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.788 13:34:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.788 13:34:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.048 [2024-11-18 13:34:10.869031] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:41.048 [2024-11-18 13:34:10.869156] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86120 ] 00:17:41.048 [2024-11-18 13:34:11.042822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.308 [2024-11-18 13:34:11.150851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.308 [2024-11-18 13:34:11.341674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.308 [2024-11-18 13:34:11.341704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.878 malloc1 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.878 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.878 [2024-11-18 13:34:11.735232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:41.879 [2024-11-18 13:34:11.735336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.879 [2024-11-18 13:34:11.735379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:41.879 [2024-11-18 13:34:11.735408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.879 [2024-11-18 13:34:11.737383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.879 [2024-11-18 13:34:11.737463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:41.879 pt1 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.879 malloc2 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.879 [2024-11-18 13:34:11.792440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.879 [2024-11-18 13:34:11.792526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.879 [2024-11-18 13:34:11.792560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:41.879 [2024-11-18 13:34:11.792587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.879 [2024-11-18 13:34:11.794481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.879 [2024-11-18 
13:34:11.794548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.879 pt2 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.879 [2024-11-18 13:34:11.804491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:41.879 [2024-11-18 13:34:11.806118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.879 [2024-11-18 13:34:11.806301] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:41.879 [2024-11-18 13:34:11.806317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:41.879 [2024-11-18 13:34:11.806521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:41.879 [2024-11-18 13:34:11.806673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:41.879 [2024-11-18 13:34:11.806687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:41.879 [2024-11-18 13:34:11.806817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.879 "name": "raid_bdev1", 00:17:41.879 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:41.879 "strip_size_kb": 0, 00:17:41.879 "state": "online", 00:17:41.879 "raid_level": "raid1", 00:17:41.879 "superblock": true, 00:17:41.879 "num_base_bdevs": 2, 00:17:41.879 
"num_base_bdevs_discovered": 2, 00:17:41.879 "num_base_bdevs_operational": 2, 00:17:41.879 "base_bdevs_list": [ 00:17:41.879 { 00:17:41.879 "name": "pt1", 00:17:41.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.879 "is_configured": true, 00:17:41.879 "data_offset": 256, 00:17:41.879 "data_size": 7936 00:17:41.879 }, 00:17:41.879 { 00:17:41.879 "name": "pt2", 00:17:41.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.879 "is_configured": true, 00:17:41.879 "data_offset": 256, 00:17:41.879 "data_size": 7936 00:17:41.879 } 00:17:41.879 ] 00:17:41.879 }' 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.879 13:34:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.449 [2024-11-18 13:34:12.259868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.449 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:42.449 "name": "raid_bdev1", 00:17:42.449 "aliases": [ 00:17:42.449 "4c113c0a-cd34-4699-afb5-7c0c2335a390" 00:17:42.449 ], 00:17:42.449 "product_name": "Raid Volume", 00:17:42.449 "block_size": 4096, 00:17:42.449 "num_blocks": 7936, 00:17:42.449 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:42.449 "assigned_rate_limits": { 00:17:42.449 "rw_ios_per_sec": 0, 00:17:42.449 "rw_mbytes_per_sec": 0, 00:17:42.449 "r_mbytes_per_sec": 0, 00:17:42.449 "w_mbytes_per_sec": 0 00:17:42.449 }, 00:17:42.449 "claimed": false, 00:17:42.449 "zoned": false, 00:17:42.449 "supported_io_types": { 00:17:42.449 "read": true, 00:17:42.449 "write": true, 00:17:42.449 "unmap": false, 00:17:42.449 "flush": false, 00:17:42.449 "reset": true, 00:17:42.449 "nvme_admin": false, 00:17:42.449 "nvme_io": false, 00:17:42.449 "nvme_io_md": false, 00:17:42.449 "write_zeroes": true, 00:17:42.449 "zcopy": false, 00:17:42.449 "get_zone_info": false, 00:17:42.449 "zone_management": false, 00:17:42.449 "zone_append": false, 00:17:42.449 "compare": false, 00:17:42.449 "compare_and_write": false, 00:17:42.449 "abort": false, 00:17:42.449 "seek_hole": false, 00:17:42.449 "seek_data": false, 00:17:42.449 "copy": false, 00:17:42.449 "nvme_iov_md": false 00:17:42.449 }, 00:17:42.449 "memory_domains": [ 00:17:42.449 { 00:17:42.449 "dma_device_id": "system", 00:17:42.449 "dma_device_type": 1 00:17:42.449 }, 00:17:42.449 { 00:17:42.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.449 "dma_device_type": 2 00:17:42.449 }, 00:17:42.449 { 00:17:42.449 "dma_device_id": "system", 00:17:42.449 "dma_device_type": 1 00:17:42.449 }, 00:17:42.449 { 00:17:42.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.450 "dma_device_type": 2 00:17:42.450 } 00:17:42.450 ], 
00:17:42.450 "driver_specific": { 00:17:42.450 "raid": { 00:17:42.450 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:42.450 "strip_size_kb": 0, 00:17:42.450 "state": "online", 00:17:42.450 "raid_level": "raid1", 00:17:42.450 "superblock": true, 00:17:42.450 "num_base_bdevs": 2, 00:17:42.450 "num_base_bdevs_discovered": 2, 00:17:42.450 "num_base_bdevs_operational": 2, 00:17:42.450 "base_bdevs_list": [ 00:17:42.450 { 00:17:42.450 "name": "pt1", 00:17:42.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.450 "is_configured": true, 00:17:42.450 "data_offset": 256, 00:17:42.450 "data_size": 7936 00:17:42.450 }, 00:17:42.450 { 00:17:42.450 "name": "pt2", 00:17:42.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.450 "is_configured": true, 00:17:42.450 "data_offset": 256, 00:17:42.450 "data_size": 7936 00:17:42.450 } 00:17:42.450 ] 00:17:42.450 } 00:17:42.450 } 00:17:42.450 }' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:42.450 pt2' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.450 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:42.450 [2024-11-18 13:34:12.491465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.710 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:42.710 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4c113c0a-cd34-4699-afb5-7c0c2335a390 00:17:42.710 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 4c113c0a-cd34-4699-afb5-7c0c2335a390 ']' 00:17:42.710 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.710 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.710 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.710 [2024-11-18 13:34:12.539164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.710 [2024-11-18 13:34:12.539224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.711 [2024-11-18 13:34:12.539310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.711 [2024-11-18 13:34:12.539381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.711 [2024-11-18 13:34:12.539432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 [2024-11-18 13:34:12.675002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:42.711 [2024-11-18 13:34:12.676735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:42.711 [2024-11-18 13:34:12.676849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:42.711 [2024-11-18 13:34:12.676900] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:42.711 [2024-11-18 13:34:12.676914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.711 [2024-11-18 13:34:12.676923] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:42.711 request: 00:17:42.711 { 00:17:42.711 "name": "raid_bdev1", 00:17:42.711 "raid_level": "raid1", 00:17:42.711 "base_bdevs": [ 00:17:42.711 "malloc1", 00:17:42.711 "malloc2" 00:17:42.711 ], 00:17:42.711 "superblock": false, 00:17:42.711 "method": "bdev_raid_create", 00:17:42.711 "req_id": 1 00:17:42.711 } 00:17:42.711 Got JSON-RPC error response 00:17:42.711 response: 00:17:42.711 { 00:17:42.711 "code": -17, 00:17:42.711 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:42.711 } 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 [2024-11-18 13:34:12.734933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:42.711 [2024-11-18 13:34:12.735018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.711 [2024-11-18 13:34:12.735035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:42.711 [2024-11-18 13:34:12.735045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.711 [2024-11-18 13:34:12.736986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.711 [2024-11-18 13:34:12.737025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:42.711 [2024-11-18 13:34:12.737085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:42.711 [2024-11-18 13:34:12.737155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:42.711 pt1 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.711 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.971 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.971 "name": "raid_bdev1", 00:17:42.971 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:42.971 "strip_size_kb": 0, 00:17:42.971 "state": "configuring", 00:17:42.971 "raid_level": "raid1", 00:17:42.971 "superblock": true, 00:17:42.971 "num_base_bdevs": 2, 00:17:42.971 "num_base_bdevs_discovered": 1, 00:17:42.971 "num_base_bdevs_operational": 2, 00:17:42.971 "base_bdevs_list": [ 00:17:42.971 { 00:17:42.971 "name": "pt1", 00:17:42.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.971 "is_configured": true, 00:17:42.971 "data_offset": 256, 00:17:42.971 "data_size": 7936 00:17:42.971 }, 00:17:42.971 { 00:17:42.971 "name": null, 00:17:42.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.971 "is_configured": false, 00:17:42.971 "data_offset": 256, 00:17:42.971 "data_size": 7936 00:17:42.971 } 
00:17:42.971 ] 00:17:42.971 }' 00:17:42.971 13:34:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.971 13:34:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.231 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:43.231 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:43.231 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:43.231 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:43.231 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.231 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.231 [2024-11-18 13:34:13.198120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:43.232 [2024-11-18 13:34:13.198225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.232 [2024-11-18 13:34:13.198259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:43.232 [2024-11-18 13:34:13.198288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.232 [2024-11-18 13:34:13.198668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.232 [2024-11-18 13:34:13.198729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:43.232 [2024-11-18 13:34:13.198819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:43.232 [2024-11-18 13:34:13.198876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:43.232 [2024-11-18 13:34:13.199006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:43.232 [2024-11-18 13:34:13.199045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.232 [2024-11-18 13:34:13.199289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:43.232 [2024-11-18 13:34:13.199471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:43.232 [2024-11-18 13:34:13.199512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:43.232 [2024-11-18 13:34:13.199666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.232 pt2 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.232 "name": "raid_bdev1", 00:17:43.232 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:43.232 "strip_size_kb": 0, 00:17:43.232 "state": "online", 00:17:43.232 "raid_level": "raid1", 00:17:43.232 "superblock": true, 00:17:43.232 "num_base_bdevs": 2, 00:17:43.232 "num_base_bdevs_discovered": 2, 00:17:43.232 "num_base_bdevs_operational": 2, 00:17:43.232 "base_bdevs_list": [ 00:17:43.232 { 00:17:43.232 "name": "pt1", 00:17:43.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.232 "is_configured": true, 00:17:43.232 "data_offset": 256, 00:17:43.232 "data_size": 7936 00:17:43.232 }, 00:17:43.232 { 00:17:43.232 "name": "pt2", 00:17:43.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.232 "is_configured": true, 00:17:43.232 "data_offset": 256, 00:17:43.232 "data_size": 7936 00:17:43.232 } 00:17:43.232 ] 00:17:43.232 }' 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.232 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.802 [2024-11-18 13:34:13.677510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.802 "name": "raid_bdev1", 00:17:43.802 "aliases": [ 00:17:43.802 "4c113c0a-cd34-4699-afb5-7c0c2335a390" 00:17:43.802 ], 00:17:43.802 "product_name": "Raid Volume", 00:17:43.802 "block_size": 4096, 00:17:43.802 "num_blocks": 7936, 00:17:43.802 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:43.802 "assigned_rate_limits": { 00:17:43.802 "rw_ios_per_sec": 0, 00:17:43.802 "rw_mbytes_per_sec": 0, 00:17:43.802 "r_mbytes_per_sec": 0, 00:17:43.802 "w_mbytes_per_sec": 0 00:17:43.802 }, 00:17:43.802 "claimed": false, 00:17:43.802 "zoned": false, 00:17:43.802 "supported_io_types": { 00:17:43.802 "read": true, 00:17:43.802 "write": true, 00:17:43.802 "unmap": false, 
00:17:43.802 "flush": false, 00:17:43.802 "reset": true, 00:17:43.802 "nvme_admin": false, 00:17:43.802 "nvme_io": false, 00:17:43.802 "nvme_io_md": false, 00:17:43.802 "write_zeroes": true, 00:17:43.802 "zcopy": false, 00:17:43.802 "get_zone_info": false, 00:17:43.802 "zone_management": false, 00:17:43.802 "zone_append": false, 00:17:43.802 "compare": false, 00:17:43.802 "compare_and_write": false, 00:17:43.802 "abort": false, 00:17:43.802 "seek_hole": false, 00:17:43.802 "seek_data": false, 00:17:43.802 "copy": false, 00:17:43.802 "nvme_iov_md": false 00:17:43.802 }, 00:17:43.802 "memory_domains": [ 00:17:43.802 { 00:17:43.802 "dma_device_id": "system", 00:17:43.802 "dma_device_type": 1 00:17:43.802 }, 00:17:43.802 { 00:17:43.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.802 "dma_device_type": 2 00:17:43.802 }, 00:17:43.802 { 00:17:43.802 "dma_device_id": "system", 00:17:43.802 "dma_device_type": 1 00:17:43.802 }, 00:17:43.802 { 00:17:43.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.802 "dma_device_type": 2 00:17:43.802 } 00:17:43.802 ], 00:17:43.802 "driver_specific": { 00:17:43.802 "raid": { 00:17:43.802 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:43.802 "strip_size_kb": 0, 00:17:43.802 "state": "online", 00:17:43.802 "raid_level": "raid1", 00:17:43.802 "superblock": true, 00:17:43.802 "num_base_bdevs": 2, 00:17:43.802 "num_base_bdevs_discovered": 2, 00:17:43.802 "num_base_bdevs_operational": 2, 00:17:43.802 "base_bdevs_list": [ 00:17:43.802 { 00:17:43.802 "name": "pt1", 00:17:43.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.802 "is_configured": true, 00:17:43.802 "data_offset": 256, 00:17:43.802 "data_size": 7936 00:17:43.802 }, 00:17:43.802 { 00:17:43.802 "name": "pt2", 00:17:43.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.802 "is_configured": true, 00:17:43.802 "data_offset": 256, 00:17:43.802 "data_size": 7936 00:17:43.802 } 00:17:43.802 ] 00:17:43.802 } 00:17:43.802 } 00:17:43.802 }' 00:17:43.802 
13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:43.802 pt2' 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.802 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.062 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:44.062 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:44.063 [2024-11-18 13:34:13.929071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 4c113c0a-cd34-4699-afb5-7c0c2335a390 '!=' 4c113c0a-cd34-4699-afb5-7c0c2335a390 ']' 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.063 [2024-11-18 13:34:13.972820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.063 13:34:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.063 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.063 "name": "raid_bdev1", 00:17:44.063 "uuid": 
"4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:44.063 "strip_size_kb": 0, 00:17:44.063 "state": "online", 00:17:44.063 "raid_level": "raid1", 00:17:44.063 "superblock": true, 00:17:44.063 "num_base_bdevs": 2, 00:17:44.063 "num_base_bdevs_discovered": 1, 00:17:44.063 "num_base_bdevs_operational": 1, 00:17:44.063 "base_bdevs_list": [ 00:17:44.063 { 00:17:44.063 "name": null, 00:17:44.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.063 "is_configured": false, 00:17:44.063 "data_offset": 0, 00:17:44.063 "data_size": 7936 00:17:44.063 }, 00:17:44.063 { 00:17:44.063 "name": "pt2", 00:17:44.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.063 "is_configured": true, 00:17:44.063 "data_offset": 256, 00:17:44.063 "data_size": 7936 00:17:44.063 } 00:17:44.063 ] 00:17:44.063 }' 00:17:44.063 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.063 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.633 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.633 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.633 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.633 [2024-11-18 13:34:14.435999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.634 [2024-11-18 13:34:14.436064] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.634 [2024-11-18 13:34:14.436145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.634 [2024-11-18 13:34:14.436199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.634 [2024-11-18 13:34:14.436243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.634 [2024-11-18 13:34:14.511861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:44.634 [2024-11-18 13:34:14.511945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.634 [2024-11-18 13:34:14.511964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:44.634 [2024-11-18 13:34:14.511972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.634 [2024-11-18 13:34:14.513982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.634 [2024-11-18 13:34:14.514023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:44.634 [2024-11-18 13:34:14.514088] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:44.634 [2024-11-18 13:34:14.514147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:44.634 [2024-11-18 13:34:14.514243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:44.634 [2024-11-18 13:34:14.514255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:44.634 [2024-11-18 13:34:14.514456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:44.634 [2024-11-18 13:34:14.514585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:44.634 [2024-11-18 13:34:14.514604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:44.634 [2024-11-18 13:34:14.514729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.634 pt2 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.634 13:34:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.634 "name": "raid_bdev1", 00:17:44.634 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:44.634 "strip_size_kb": 0, 00:17:44.634 "state": "online", 00:17:44.634 "raid_level": "raid1", 00:17:44.634 "superblock": true, 00:17:44.634 "num_base_bdevs": 2, 00:17:44.634 "num_base_bdevs_discovered": 1, 00:17:44.634 "num_base_bdevs_operational": 1, 00:17:44.634 "base_bdevs_list": [ 00:17:44.634 { 00:17:44.634 "name": null, 00:17:44.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.634 "is_configured": false, 00:17:44.634 "data_offset": 256, 00:17:44.634 "data_size": 7936 00:17:44.634 }, 00:17:44.634 { 00:17:44.634 "name": "pt2", 00:17:44.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:44.634 "is_configured": true, 00:17:44.634 "data_offset": 256, 00:17:44.634 "data_size": 7936 00:17:44.634 } 00:17:44.634 ] 00:17:44.634 }' 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.634 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.205 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:45.205 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.205 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.205 [2024-11-18 13:34:14.987102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.205 [2024-11-18 13:34:14.987168] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.205 [2024-11-18 13:34:14.987220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.205 [2024-11-18 13:34:14.987259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:45.205 [2024-11-18 13:34:14.987267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:45.205 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.205 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.205 13:34:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:45.205 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.205 13:34:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.205 [2024-11-18 13:34:15.047018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:45.205 [2024-11-18 13:34:15.047104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.205 [2024-11-18 13:34:15.047145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:45.205 [2024-11-18 13:34:15.047173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.205 [2024-11-18 13:34:15.049258] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.205 [2024-11-18 13:34:15.049323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:45.205 [2024-11-18 13:34:15.049408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:45.205 [2024-11-18 13:34:15.049463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:45.205 [2024-11-18 13:34:15.049614] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:45.205 [2024-11-18 13:34:15.049664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.205 [2024-11-18 13:34:15.049702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:45.205 [2024-11-18 13:34:15.049804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:45.205 [2024-11-18 13:34:15.049908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:45.205 [2024-11-18 13:34:15.049946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:45.205 [2024-11-18 13:34:15.050187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:45.205 [2024-11-18 13:34:15.050357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:45.205 [2024-11-18 13:34:15.050400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:45.205 [2024-11-18 13:34:15.050581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.205 pt1 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.205 "name": "raid_bdev1", 00:17:45.205 "uuid": "4c113c0a-cd34-4699-afb5-7c0c2335a390", 00:17:45.205 "strip_size_kb": 0, 00:17:45.205 "state": "online", 00:17:45.205 
"raid_level": "raid1", 00:17:45.205 "superblock": true, 00:17:45.205 "num_base_bdevs": 2, 00:17:45.205 "num_base_bdevs_discovered": 1, 00:17:45.205 "num_base_bdevs_operational": 1, 00:17:45.205 "base_bdevs_list": [ 00:17:45.205 { 00:17:45.205 "name": null, 00:17:45.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.205 "is_configured": false, 00:17:45.205 "data_offset": 256, 00:17:45.205 "data_size": 7936 00:17:45.205 }, 00:17:45.205 { 00:17:45.205 "name": "pt2", 00:17:45.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.205 "is_configured": true, 00:17:45.205 "data_offset": 256, 00:17:45.205 "data_size": 7936 00:17:45.205 } 00:17:45.205 ] 00:17:45.205 }' 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.205 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.465 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:45.465 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.465 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.465 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:45.465 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:17:45.725 [2024-11-18 13:34:15.534396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 4c113c0a-cd34-4699-afb5-7c0c2335a390 '!=' 4c113c0a-cd34-4699-afb5-7c0c2335a390 ']' 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86120 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86120 ']' 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86120 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86120 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86120' 00:17:45.725 killing process with pid 86120 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86120 00:17:45.725 [2024-11-18 13:34:15.619135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.725 [2024-11-18 13:34:15.619198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.725 [2024-11-18 13:34:15.619233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.725 [2024-11-18 
13:34:15.619245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:45.725 13:34:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86120 00:17:45.985 [2024-11-18 13:34:15.813163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.925 ************************************ 00:17:46.925 END TEST raid_superblock_test_4k 00:17:46.925 13:34:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:46.925 00:17:46.925 real 0m6.073s 00:17:46.925 user 0m9.237s 00:17:46.925 sys 0m1.146s 00:17:46.925 13:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.925 13:34:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.925 ************************************ 00:17:46.925 13:34:16 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:46.925 13:34:16 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:46.925 13:34:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:46.925 13:34:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.925 13:34:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.925 ************************************ 00:17:46.925 START TEST raid_rebuild_test_sb_4k 00:17:46.925 ************************************ 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:46.925 13:34:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86443 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86443 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86443 ']' 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.925 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.926 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.926 13:34:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.186 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:47.186 Zero copy mechanism will not be used. 00:17:47.186 [2024-11-18 13:34:17.030752] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:47.186 [2024-11-18 13:34:17.030858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86443 ] 00:17:47.186 [2024-11-18 13:34:17.204915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.446 [2024-11-18 13:34:17.310159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.446 [2024-11-18 13:34:17.496252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.446 [2024-11-18 13:34:17.496302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 BaseBdev1_malloc 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 [2024-11-18 13:34:17.899729] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.017 [2024-11-18 13:34:17.899795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.017 [2024-11-18 13:34:17.899817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:48.017 [2024-11-18 13:34:17.899828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.017 [2024-11-18 13:34:17.901825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.017 [2024-11-18 13:34:17.901864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.017 BaseBdev1 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 BaseBdev2_malloc 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 [2024-11-18 13:34:17.952778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:48.017 [2024-11-18 13:34:17.952831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:48.017 [2024-11-18 13:34:17.952847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:48.017 [2024-11-18 13:34:17.952858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.017 [2024-11-18 13:34:17.954798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.017 [2024-11-18 13:34:17.954884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:48.017 BaseBdev2 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.017 13:34:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 spare_malloc 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 spare_delay 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 
[2024-11-18 13:34:18.054016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.017 [2024-11-18 13:34:18.054073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.017 [2024-11-18 13:34:18.054090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:48.017 [2024-11-18 13:34:18.054100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.017 [2024-11-18 13:34:18.056120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.017 [2024-11-18 13:34:18.056170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.017 spare 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.017 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.017 [2024-11-18 13:34:18.066056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.278 [2024-11-18 13:34:18.067870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.278 [2024-11-18 13:34:18.068053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:48.278 [2024-11-18 13:34:18.068069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:48.278 [2024-11-18 13:34:18.068319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:48.278 [2024-11-18 13:34:18.068468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:48.278 [2024-11-18 
13:34:18.068476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:48.278 [2024-11-18 13:34:18.068611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.278 "name": "raid_bdev1", 00:17:48.278 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:48.278 "strip_size_kb": 0, 00:17:48.278 "state": "online", 00:17:48.278 "raid_level": "raid1", 00:17:48.278 "superblock": true, 00:17:48.278 "num_base_bdevs": 2, 00:17:48.278 "num_base_bdevs_discovered": 2, 00:17:48.278 "num_base_bdevs_operational": 2, 00:17:48.278 "base_bdevs_list": [ 00:17:48.278 { 00:17:48.278 "name": "BaseBdev1", 00:17:48.278 "uuid": "ba022c66-f17f-5f50-bbfd-9289d7c542e4", 00:17:48.278 "is_configured": true, 00:17:48.278 "data_offset": 256, 00:17:48.278 "data_size": 7936 00:17:48.278 }, 00:17:48.278 { 00:17:48.278 "name": "BaseBdev2", 00:17:48.278 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:48.278 "is_configured": true, 00:17:48.278 "data_offset": 256, 00:17:48.278 "data_size": 7936 00:17:48.278 } 00:17:48.278 ] 00:17:48.278 }' 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.278 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.539 [2024-11-18 13:34:18.529466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.539 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.799 
13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:48.799 [2024-11-18 13:34:18.784792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:48.799 /dev/nbd0 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.799 1+0 records in 00:17:48.799 1+0 records out 00:17:48.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340013 s, 12.0 MB/s 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:48.799 13:34:18 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.799 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:49.059 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.059 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.059 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:49.059 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:49.059 13:34:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:49.630 7936+0 records in 00:17:49.630 7936+0 records out 00:17:49.630 32505856 bytes (33 MB, 31 MiB) copied, 0.587985 s, 55.3 MB/s 00:17:49.630 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:49.630 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:49.630 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:49.630 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:49.630 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:49.630 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:49.630 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:49.630 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:49.630 
[2024-11-18 13:34:19.681352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.891 [2024-11-18 13:34:19.699009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.891 13:34:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.891 "name": "raid_bdev1", 00:17:49.891 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:49.891 "strip_size_kb": 0, 00:17:49.891 "state": "online", 00:17:49.891 "raid_level": "raid1", 00:17:49.891 "superblock": true, 00:17:49.891 "num_base_bdevs": 2, 00:17:49.891 "num_base_bdevs_discovered": 1, 00:17:49.891 "num_base_bdevs_operational": 1, 00:17:49.891 "base_bdevs_list": [ 00:17:49.891 { 00:17:49.891 "name": null, 00:17:49.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.891 "is_configured": false, 00:17:49.891 "data_offset": 0, 00:17:49.891 "data_size": 7936 00:17:49.891 }, 00:17:49.891 { 00:17:49.891 "name": "BaseBdev2", 00:17:49.891 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:49.891 "is_configured": true, 00:17:49.891 "data_offset": 256, 00:17:49.891 
"data_size": 7936 00:17:49.891 } 00:17:49.891 ] 00:17:49.891 }' 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.891 13:34:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.151 13:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.151 13:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.151 13:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.151 [2024-11-18 13:34:20.182240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.151 [2024-11-18 13:34:20.198416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:50.151 13:34:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.151 13:34:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:50.151 [2024-11-18 13:34:20.200227] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.533 "name": "raid_bdev1", 00:17:51.533 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:51.533 "strip_size_kb": 0, 00:17:51.533 "state": "online", 00:17:51.533 "raid_level": "raid1", 00:17:51.533 "superblock": true, 00:17:51.533 "num_base_bdevs": 2, 00:17:51.533 "num_base_bdevs_discovered": 2, 00:17:51.533 "num_base_bdevs_operational": 2, 00:17:51.533 "process": { 00:17:51.533 "type": "rebuild", 00:17:51.533 "target": "spare", 00:17:51.533 "progress": { 00:17:51.533 "blocks": 2560, 00:17:51.533 "percent": 32 00:17:51.533 } 00:17:51.533 }, 00:17:51.533 "base_bdevs_list": [ 00:17:51.533 { 00:17:51.533 "name": "spare", 00:17:51.533 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:51.533 "is_configured": true, 00:17:51.533 "data_offset": 256, 00:17:51.533 "data_size": 7936 00:17:51.533 }, 00:17:51.533 { 00:17:51.533 "name": "BaseBdev2", 00:17:51.533 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:51.533 "is_configured": true, 00:17:51.533 "data_offset": 256, 00:17:51.533 "data_size": 7936 00:17:51.533 } 00:17:51.533 ] 00:17:51.533 }' 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.533 [2024-11-18 13:34:21.360417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.533 [2024-11-18 13:34:21.404829] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:51.533 [2024-11-18 13:34:21.404888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.533 [2024-11-18 13:34:21.404901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.533 [2024-11-18 13:34:21.404910] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.533 "name": "raid_bdev1", 00:17:51.533 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:51.533 "strip_size_kb": 0, 00:17:51.533 "state": "online", 00:17:51.533 "raid_level": "raid1", 00:17:51.533 "superblock": true, 00:17:51.533 "num_base_bdevs": 2, 00:17:51.533 "num_base_bdevs_discovered": 1, 00:17:51.533 "num_base_bdevs_operational": 1, 00:17:51.533 "base_bdevs_list": [ 00:17:51.533 { 00:17:51.533 "name": null, 00:17:51.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.533 "is_configured": false, 00:17:51.533 "data_offset": 0, 00:17:51.533 "data_size": 7936 00:17:51.533 }, 00:17:51.533 { 00:17:51.533 "name": "BaseBdev2", 00:17:51.533 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:51.533 "is_configured": true, 00:17:51.533 "data_offset": 256, 00:17:51.533 "data_size": 7936 00:17:51.533 } 00:17:51.533 ] 00:17:51.533 }' 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.533 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.103 13:34:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.103 "name": "raid_bdev1", 00:17:52.103 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:52.103 "strip_size_kb": 0, 00:17:52.103 "state": "online", 00:17:52.103 "raid_level": "raid1", 00:17:52.103 "superblock": true, 00:17:52.103 "num_base_bdevs": 2, 00:17:52.103 "num_base_bdevs_discovered": 1, 00:17:52.103 "num_base_bdevs_operational": 1, 00:17:52.103 "base_bdevs_list": [ 00:17:52.103 { 00:17:52.103 "name": null, 00:17:52.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.103 "is_configured": false, 00:17:52.103 "data_offset": 0, 00:17:52.103 "data_size": 7936 00:17:52.103 }, 00:17:52.103 { 00:17:52.103 "name": "BaseBdev2", 00:17:52.103 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:52.103 "is_configured": true, 00:17:52.103 "data_offset": 
256, 00:17:52.103 "data_size": 7936 00:17:52.103 } 00:17:52.103 ] 00:17:52.103 }' 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.103 13:34:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.103 13:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.103 13:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:52.103 13:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.103 13:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.103 [2024-11-18 13:34:22.021603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.103 [2024-11-18 13:34:22.036764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:52.103 13:34:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.103 13:34:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:52.103 [2024-11-18 13:34:22.038509] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.041 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.302 "name": "raid_bdev1", 00:17:53.302 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:53.302 "strip_size_kb": 0, 00:17:53.302 "state": "online", 00:17:53.302 "raid_level": "raid1", 00:17:53.302 "superblock": true, 00:17:53.302 "num_base_bdevs": 2, 00:17:53.302 "num_base_bdevs_discovered": 2, 00:17:53.302 "num_base_bdevs_operational": 2, 00:17:53.302 "process": { 00:17:53.302 "type": "rebuild", 00:17:53.302 "target": "spare", 00:17:53.302 "progress": { 00:17:53.302 "blocks": 2560, 00:17:53.302 "percent": 32 00:17:53.302 } 00:17:53.302 }, 00:17:53.302 "base_bdevs_list": [ 00:17:53.302 { 00:17:53.302 "name": "spare", 00:17:53.302 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:53.302 "is_configured": true, 00:17:53.302 "data_offset": 256, 00:17:53.302 "data_size": 7936 00:17:53.302 }, 00:17:53.302 { 00:17:53.302 "name": "BaseBdev2", 00:17:53.302 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:53.302 "is_configured": true, 00:17:53.302 "data_offset": 256, 00:17:53.302 "data_size": 7936 00:17:53.302 } 00:17:53.302 ] 00:17:53.302 }' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:53.302 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=677 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.302 "name": "raid_bdev1", 00:17:53.302 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:53.302 "strip_size_kb": 0, 00:17:53.302 "state": "online", 00:17:53.302 "raid_level": "raid1", 00:17:53.302 "superblock": true, 00:17:53.302 "num_base_bdevs": 2, 00:17:53.302 "num_base_bdevs_discovered": 2, 00:17:53.302 "num_base_bdevs_operational": 2, 00:17:53.302 "process": { 00:17:53.302 "type": "rebuild", 00:17:53.302 "target": "spare", 00:17:53.302 "progress": { 00:17:53.302 "blocks": 2816, 00:17:53.302 "percent": 35 00:17:53.302 } 00:17:53.302 }, 00:17:53.302 "base_bdevs_list": [ 00:17:53.302 { 00:17:53.302 "name": "spare", 00:17:53.302 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:53.302 "is_configured": true, 00:17:53.302 "data_offset": 256, 00:17:53.302 "data_size": 7936 00:17:53.302 }, 00:17:53.302 { 00:17:53.302 "name": "BaseBdev2", 00:17:53.302 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:53.302 "is_configured": true, 00:17:53.302 "data_offset": 256, 00:17:53.302 "data_size": 7936 00:17:53.302 } 00:17:53.302 ] 00:17:53.302 }' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.302 13:34:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.684 "name": "raid_bdev1", 00:17:54.684 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:54.684 "strip_size_kb": 0, 00:17:54.684 "state": "online", 00:17:54.684 "raid_level": "raid1", 00:17:54.684 "superblock": true, 00:17:54.684 "num_base_bdevs": 2, 00:17:54.684 "num_base_bdevs_discovered": 2, 00:17:54.684 "num_base_bdevs_operational": 2, 00:17:54.684 "process": { 00:17:54.684 "type": "rebuild", 00:17:54.684 "target": "spare", 00:17:54.684 "progress": { 00:17:54.684 "blocks": 5888, 00:17:54.684 "percent": 74 00:17:54.684 } 00:17:54.684 }, 00:17:54.684 "base_bdevs_list": [ 00:17:54.684 { 
00:17:54.684 "name": "spare", 00:17:54.684 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:54.684 "is_configured": true, 00:17:54.684 "data_offset": 256, 00:17:54.684 "data_size": 7936 00:17:54.684 }, 00:17:54.684 { 00:17:54.684 "name": "BaseBdev2", 00:17:54.684 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:54.684 "is_configured": true, 00:17:54.684 "data_offset": 256, 00:17:54.684 "data_size": 7936 00:17:54.684 } 00:17:54.684 ] 00:17:54.684 }' 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.684 13:34:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.254 [2024-11-18 13:34:25.149804] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:55.254 [2024-11-18 13:34:25.149869] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:55.254 [2024-11-18 13:34:25.149958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.514 "name": "raid_bdev1", 00:17:55.514 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:55.514 "strip_size_kb": 0, 00:17:55.514 "state": "online", 00:17:55.514 "raid_level": "raid1", 00:17:55.514 "superblock": true, 00:17:55.514 "num_base_bdevs": 2, 00:17:55.514 "num_base_bdevs_discovered": 2, 00:17:55.514 "num_base_bdevs_operational": 2, 00:17:55.514 "base_bdevs_list": [ 00:17:55.514 { 00:17:55.514 "name": "spare", 00:17:55.514 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:55.514 "is_configured": true, 00:17:55.514 "data_offset": 256, 00:17:55.514 "data_size": 7936 00:17:55.514 }, 00:17:55.514 { 00:17:55.514 "name": "BaseBdev2", 00:17:55.514 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:55.514 "is_configured": true, 00:17:55.514 "data_offset": 256, 00:17:55.514 "data_size": 7936 00:17:55.514 } 00:17:55.514 ] 00:17:55.514 }' 00:17:55.514 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.784 "name": "raid_bdev1", 00:17:55.784 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:55.784 "strip_size_kb": 0, 00:17:55.784 "state": "online", 00:17:55.784 "raid_level": "raid1", 00:17:55.784 "superblock": true, 00:17:55.784 "num_base_bdevs": 2, 00:17:55.784 "num_base_bdevs_discovered": 2, 00:17:55.784 "num_base_bdevs_operational": 2, 00:17:55.784 "base_bdevs_list": [ 00:17:55.784 { 00:17:55.784 "name": "spare", 00:17:55.784 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:55.784 "is_configured": true, 00:17:55.784 
"data_offset": 256, 00:17:55.784 "data_size": 7936 00:17:55.784 }, 00:17:55.784 { 00:17:55.784 "name": "BaseBdev2", 00:17:55.784 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:55.784 "is_configured": true, 00:17:55.784 "data_offset": 256, 00:17:55.784 "data_size": 7936 00:17:55.784 } 00:17:55.784 ] 00:17:55.784 }' 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:55.784 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.785 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.785 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:55.785 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.055 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.055 "name": "raid_bdev1", 00:17:56.055 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:56.055 "strip_size_kb": 0, 00:17:56.055 "state": "online", 00:17:56.055 "raid_level": "raid1", 00:17:56.055 "superblock": true, 00:17:56.055 "num_base_bdevs": 2, 00:17:56.055 "num_base_bdevs_discovered": 2, 00:17:56.055 "num_base_bdevs_operational": 2, 00:17:56.055 "base_bdevs_list": [ 00:17:56.055 { 00:17:56.055 "name": "spare", 00:17:56.055 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:56.055 "is_configured": true, 00:17:56.055 "data_offset": 256, 00:17:56.055 "data_size": 7936 00:17:56.055 }, 00:17:56.055 { 00:17:56.055 "name": "BaseBdev2", 00:17:56.055 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:56.055 "is_configured": true, 00:17:56.055 "data_offset": 256, 00:17:56.055 "data_size": 7936 00:17:56.055 } 00:17:56.055 ] 00:17:56.055 }' 00:17:56.055 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.055 13:34:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.316 
[2024-11-18 13:34:26.192613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.316 [2024-11-18 13:34:26.192730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.316 [2024-11-18 13:34:26.192818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.316 [2024-11-18 13:34:26.192896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.316 [2024-11-18 13:34:26.192964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.316 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:56.576 /dev/nbd0 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:56.576 1+0 records in 00:17:56.576 1+0 records out 00:17:56.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393078 s, 10.4 MB/s 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.576 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:56.836 /dev/nbd1 00:17:56.836 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:56.837 1+0 records in 00:17:56.837 1+0 records out 00:17:56.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427953 s, 9.6 MB/s 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.837 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:57.096 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:57.096 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.096 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.096 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.096 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:57.096 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.096 13:34:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.096 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.097 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:57.357 13:34:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.357 [2024-11-18 13:34:27.385480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:57.357 [2024-11-18 13:34:27.385534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.357 [2024-11-18 13:34:27.385554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:57.357 [2024-11-18 13:34:27.385563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.357 [2024-11-18 13:34:27.387577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.357 
[2024-11-18 13:34:27.387678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:57.357 [2024-11-18 13:34:27.387768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:57.357 [2024-11-18 13:34:27.387832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.357 [2024-11-18 13:34:27.387990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.357 spare 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.357 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.617 [2024-11-18 13:34:27.487882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:57.617 [2024-11-18 13:34:27.487911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:57.617 [2024-11-18 13:34:27.488158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:57.617 [2024-11-18 13:34:27.488331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:57.617 [2024-11-18 13:34:27.488341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:57.617 [2024-11-18 13:34:27.488488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.617 13:34:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.617 "name": "raid_bdev1", 00:17:57.617 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:57.617 "strip_size_kb": 0, 00:17:57.617 "state": "online", 00:17:57.617 "raid_level": "raid1", 00:17:57.617 "superblock": true, 00:17:57.617 "num_base_bdevs": 2, 00:17:57.617 "num_base_bdevs_discovered": 2, 00:17:57.617 "num_base_bdevs_operational": 2, 
00:17:57.617 "base_bdevs_list": [ 00:17:57.617 { 00:17:57.617 "name": "spare", 00:17:57.617 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:57.617 "is_configured": true, 00:17:57.617 "data_offset": 256, 00:17:57.617 "data_size": 7936 00:17:57.617 }, 00:17:57.617 { 00:17:57.617 "name": "BaseBdev2", 00:17:57.617 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:57.617 "is_configured": true, 00:17:57.617 "data_offset": 256, 00:17:57.617 "data_size": 7936 00:17:57.617 } 00:17:57.617 ] 00:17:57.617 }' 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.617 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.186 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.187 13:34:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.187 "name": "raid_bdev1", 00:17:58.187 
"uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:58.187 "strip_size_kb": 0, 00:17:58.187 "state": "online", 00:17:58.187 "raid_level": "raid1", 00:17:58.187 "superblock": true, 00:17:58.187 "num_base_bdevs": 2, 00:17:58.187 "num_base_bdevs_discovered": 2, 00:17:58.187 "num_base_bdevs_operational": 2, 00:17:58.187 "base_bdevs_list": [ 00:17:58.187 { 00:17:58.187 "name": "spare", 00:17:58.187 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:58.187 "is_configured": true, 00:17:58.187 "data_offset": 256, 00:17:58.187 "data_size": 7936 00:17:58.187 }, 00:17:58.187 { 00:17:58.187 "name": "BaseBdev2", 00:17:58.187 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:58.187 "is_configured": true, 00:17:58.187 "data_offset": 256, 00:17:58.187 "data_size": 7936 00:17:58.187 } 00:17:58.187 ] 00:17:58.187 }' 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.187 [2024-11-18 13:34:28.152245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.187 13:34:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.187 "name": "raid_bdev1", 00:17:58.187 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:58.187 "strip_size_kb": 0, 00:17:58.187 "state": "online", 00:17:58.187 "raid_level": "raid1", 00:17:58.187 "superblock": true, 00:17:58.187 "num_base_bdevs": 2, 00:17:58.187 "num_base_bdevs_discovered": 1, 00:17:58.187 "num_base_bdevs_operational": 1, 00:17:58.187 "base_bdevs_list": [ 00:17:58.187 { 00:17:58.187 "name": null, 00:17:58.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.187 "is_configured": false, 00:17:58.187 "data_offset": 0, 00:17:58.187 "data_size": 7936 00:17:58.187 }, 00:17:58.187 { 00:17:58.187 "name": "BaseBdev2", 00:17:58.187 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:58.187 "is_configured": true, 00:17:58.187 "data_offset": 256, 00:17:58.187 "data_size": 7936 00:17:58.187 } 00:17:58.187 ] 00:17:58.187 }' 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.187 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.757 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.757 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.757 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.757 [2024-11-18 13:34:28.579552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.757 [2024-11-18 13:34:28.579701] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:17:58.757 [2024-11-18 13:34:28.579717] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:58.757 [2024-11-18 13:34:28.579752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.757 [2024-11-18 13:34:28.595289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:58.757 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.757 13:34:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:58.757 [2024-11-18 13:34:28.597017] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:59.695 "name": "raid_bdev1", 00:17:59.695 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:59.695 "strip_size_kb": 0, 00:17:59.695 "state": "online", 00:17:59.695 "raid_level": "raid1", 00:17:59.695 "superblock": true, 00:17:59.695 "num_base_bdevs": 2, 00:17:59.695 "num_base_bdevs_discovered": 2, 00:17:59.695 "num_base_bdevs_operational": 2, 00:17:59.695 "process": { 00:17:59.695 "type": "rebuild", 00:17:59.695 "target": "spare", 00:17:59.695 "progress": { 00:17:59.695 "blocks": 2560, 00:17:59.695 "percent": 32 00:17:59.695 } 00:17:59.695 }, 00:17:59.695 "base_bdevs_list": [ 00:17:59.695 { 00:17:59.695 "name": "spare", 00:17:59.695 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:17:59.695 "is_configured": true, 00:17:59.695 "data_offset": 256, 00:17:59.695 "data_size": 7936 00:17:59.695 }, 00:17:59.695 { 00:17:59.695 "name": "BaseBdev2", 00:17:59.695 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:59.695 "is_configured": true, 00:17:59.695 "data_offset": 256, 00:17:59.695 "data_size": 7936 00:17:59.695 } 00:17:59.695 ] 00:17:59.695 }' 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.695 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.695 [2024-11-18 13:34:29.736662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:17:59.955 [2024-11-18 13:34:29.801517] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:59.955 [2024-11-18 13:34:29.801622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.955 [2024-11-18 13:34:29.801652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.955 [2024-11-18 13:34:29.801674] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.955 "name": "raid_bdev1", 00:17:59.955 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:17:59.955 "strip_size_kb": 0, 00:17:59.955 "state": "online", 00:17:59.955 "raid_level": "raid1", 00:17:59.955 "superblock": true, 00:17:59.955 "num_base_bdevs": 2, 00:17:59.955 "num_base_bdevs_discovered": 1, 00:17:59.955 "num_base_bdevs_operational": 1, 00:17:59.955 "base_bdevs_list": [ 00:17:59.955 { 00:17:59.955 "name": null, 00:17:59.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.955 "is_configured": false, 00:17:59.955 "data_offset": 0, 00:17:59.955 "data_size": 7936 00:17:59.955 }, 00:17:59.955 { 00:17:59.955 "name": "BaseBdev2", 00:17:59.955 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:17:59.955 "is_configured": true, 00:17:59.955 "data_offset": 256, 00:17:59.955 "data_size": 7936 00:17:59.955 } 00:17:59.955 ] 00:17:59.955 }' 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.955 13:34:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.524 13:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:00.524 13:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.524 13:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.524 [2024-11-18 13:34:30.297637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:00.524 [2024-11-18 
13:34:30.297752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.524 [2024-11-18 13:34:30.297788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:00.524 [2024-11-18 13:34:30.297818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.524 [2024-11-18 13:34:30.298279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.524 [2024-11-18 13:34:30.298342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:00.524 [2024-11-18 13:34:30.298446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:00.524 [2024-11-18 13:34:30.298488] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:00.524 [2024-11-18 13:34:30.298546] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:00.524 [2024-11-18 13:34:30.298593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.524 [2024-11-18 13:34:30.314227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:00.524 spare 00:18:00.524 13:34:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.524 [2024-11-18 13:34:30.316053] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.524 13:34:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.464 "name": "raid_bdev1", 00:18:01.464 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:18:01.464 "strip_size_kb": 0, 00:18:01.464 
"state": "online", 00:18:01.464 "raid_level": "raid1", 00:18:01.464 "superblock": true, 00:18:01.464 "num_base_bdevs": 2, 00:18:01.464 "num_base_bdevs_discovered": 2, 00:18:01.464 "num_base_bdevs_operational": 2, 00:18:01.464 "process": { 00:18:01.464 "type": "rebuild", 00:18:01.464 "target": "spare", 00:18:01.464 "progress": { 00:18:01.464 "blocks": 2560, 00:18:01.464 "percent": 32 00:18:01.464 } 00:18:01.464 }, 00:18:01.464 "base_bdevs_list": [ 00:18:01.464 { 00:18:01.464 "name": "spare", 00:18:01.464 "uuid": "aaa13c68-b55c-5853-baf6-4db1a3e60c00", 00:18:01.464 "is_configured": true, 00:18:01.464 "data_offset": 256, 00:18:01.464 "data_size": 7936 00:18:01.464 }, 00:18:01.464 { 00:18:01.464 "name": "BaseBdev2", 00:18:01.464 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:18:01.464 "is_configured": true, 00:18:01.464 "data_offset": 256, 00:18:01.464 "data_size": 7936 00:18:01.464 } 00:18:01.464 ] 00:18:01.464 }' 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.464 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.464 [2024-11-18 13:34:31.483657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.724 [2024-11-18 13:34:31.520575] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:01.724 [2024-11-18 13:34:31.520639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.724 [2024-11-18 13:34:31.520656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.724 [2024-11-18 13:34:31.520663] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.724 13:34:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.724 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.724 "name": "raid_bdev1", 00:18:01.725 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:18:01.725 "strip_size_kb": 0, 00:18:01.725 "state": "online", 00:18:01.725 "raid_level": "raid1", 00:18:01.725 "superblock": true, 00:18:01.725 "num_base_bdevs": 2, 00:18:01.725 "num_base_bdevs_discovered": 1, 00:18:01.725 "num_base_bdevs_operational": 1, 00:18:01.725 "base_bdevs_list": [ 00:18:01.725 { 00:18:01.725 "name": null, 00:18:01.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.725 "is_configured": false, 00:18:01.725 "data_offset": 0, 00:18:01.725 "data_size": 7936 00:18:01.725 }, 00:18:01.725 { 00:18:01.725 "name": "BaseBdev2", 00:18:01.725 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:18:01.725 "is_configured": true, 00:18:01.725 "data_offset": 256, 00:18:01.725 "data_size": 7936 00:18:01.725 } 00:18:01.725 ] 00:18:01.725 }' 00:18:01.725 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.725 13:34:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.984 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.984 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.984 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.985 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.985 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.985 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.985 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.985 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.985 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.985 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.245 "name": "raid_bdev1", 00:18:02.245 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:18:02.245 "strip_size_kb": 0, 00:18:02.245 "state": "online", 00:18:02.245 "raid_level": "raid1", 00:18:02.245 "superblock": true, 00:18:02.245 "num_base_bdevs": 2, 00:18:02.245 "num_base_bdevs_discovered": 1, 00:18:02.245 "num_base_bdevs_operational": 1, 00:18:02.245 "base_bdevs_list": [ 00:18:02.245 { 00:18:02.245 "name": null, 00:18:02.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.245 "is_configured": false, 00:18:02.245 "data_offset": 0, 00:18:02.245 "data_size": 7936 00:18:02.245 }, 00:18:02.245 { 00:18:02.245 "name": "BaseBdev2", 00:18:02.245 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:18:02.245 "is_configured": true, 00:18:02.245 "data_offset": 256, 00:18:02.245 "data_size": 7936 00:18:02.245 } 00:18:02.245 ] 00:18:02.245 }' 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.245 [2024-11-18 13:34:32.181267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:02.245 [2024-11-18 13:34:32.181319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.245 [2024-11-18 13:34:32.181339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:02.245 [2024-11-18 13:34:32.181357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.245 [2024-11-18 13:34:32.181763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.245 [2024-11-18 13:34:32.181780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.245 [2024-11-18 13:34:32.181849] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:02.245 [2024-11-18 13:34:32.181862] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:02.245 [2024-11-18 13:34:32.181873] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:02.245 [2024-11-18 13:34:32.181882] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:02.245 BaseBdev1 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.245 13:34:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:03.187 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.187 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.187 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.187 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.188 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.447 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.447 "name": "raid_bdev1", 00:18:03.447 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:18:03.447 "strip_size_kb": 0, 00:18:03.447 "state": "online", 00:18:03.447 "raid_level": "raid1", 00:18:03.447 "superblock": true, 00:18:03.447 "num_base_bdevs": 2, 00:18:03.447 "num_base_bdevs_discovered": 1, 00:18:03.447 "num_base_bdevs_operational": 1, 00:18:03.447 "base_bdevs_list": [ 00:18:03.447 { 00:18:03.447 "name": null, 00:18:03.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.447 "is_configured": false, 00:18:03.447 "data_offset": 0, 00:18:03.447 "data_size": 7936 00:18:03.447 }, 00:18:03.447 { 00:18:03.447 "name": "BaseBdev2", 00:18:03.447 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:18:03.447 "is_configured": true, 00:18:03.447 "data_offset": 256, 00:18:03.447 "data_size": 7936 00:18:03.447 } 00:18:03.447 ] 00:18:03.447 }' 00:18:03.447 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.447 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.708 "name": "raid_bdev1", 00:18:03.708 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:18:03.708 "strip_size_kb": 0, 00:18:03.708 "state": "online", 00:18:03.708 "raid_level": "raid1", 00:18:03.708 "superblock": true, 00:18:03.708 "num_base_bdevs": 2, 00:18:03.708 "num_base_bdevs_discovered": 1, 00:18:03.708 "num_base_bdevs_operational": 1, 00:18:03.708 "base_bdevs_list": [ 00:18:03.708 { 00:18:03.708 "name": null, 00:18:03.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.708 "is_configured": false, 00:18:03.708 "data_offset": 0, 00:18:03.708 "data_size": 7936 00:18:03.708 }, 00:18:03.708 { 00:18:03.708 "name": "BaseBdev2", 00:18:03.708 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:18:03.708 "is_configured": true, 00:18:03.708 "data_offset": 256, 00:18:03.708 "data_size": 7936 00:18:03.708 } 00:18:03.708 ] 00:18:03.708 }' 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.708 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.968 [2024-11-18 13:34:33.766695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.968 [2024-11-18 13:34:33.766828] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.968 [2024-11-18 13:34:33.766844] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:03.968 request: 00:18:03.968 { 00:18:03.968 "base_bdev": "BaseBdev1", 00:18:03.968 "raid_bdev": "raid_bdev1", 00:18:03.968 "method": "bdev_raid_add_base_bdev", 00:18:03.968 "req_id": 1 00:18:03.968 } 00:18:03.968 Got JSON-RPC error response 00:18:03.968 response: 00:18:03.968 { 00:18:03.968 "code": -22, 00:18:03.968 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:03.968 } 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:03.968 13:34:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.908 "name": "raid_bdev1", 00:18:04.908 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:18:04.908 "strip_size_kb": 0, 00:18:04.908 "state": "online", 00:18:04.908 "raid_level": "raid1", 00:18:04.908 "superblock": true, 00:18:04.908 "num_base_bdevs": 2, 00:18:04.908 "num_base_bdevs_discovered": 1, 00:18:04.908 "num_base_bdevs_operational": 1, 00:18:04.908 "base_bdevs_list": [ 00:18:04.908 { 00:18:04.908 "name": null, 00:18:04.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.908 "is_configured": false, 00:18:04.908 "data_offset": 0, 00:18:04.908 "data_size": 7936 00:18:04.908 }, 00:18:04.908 { 00:18:04.908 "name": "BaseBdev2", 00:18:04.908 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:18:04.908 "is_configured": true, 00:18:04.908 "data_offset": 256, 00:18:04.908 "data_size": 7936 00:18:04.908 } 00:18:04.908 ] 00:18:04.908 }' 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.908 13:34:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.167 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.167 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.167 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.167 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.167 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.167 13:34:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.167 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.168 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.168 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.428 "name": "raid_bdev1", 00:18:05.428 "uuid": "e29d341d-80ab-41b0-b074-3240ad1aa0e1", 00:18:05.428 "strip_size_kb": 0, 00:18:05.428 "state": "online", 00:18:05.428 "raid_level": "raid1", 00:18:05.428 "superblock": true, 00:18:05.428 "num_base_bdevs": 2, 00:18:05.428 "num_base_bdevs_discovered": 1, 00:18:05.428 "num_base_bdevs_operational": 1, 00:18:05.428 "base_bdevs_list": [ 00:18:05.428 { 00:18:05.428 "name": null, 00:18:05.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.428 "is_configured": false, 00:18:05.428 "data_offset": 0, 00:18:05.428 "data_size": 7936 00:18:05.428 }, 00:18:05.428 { 00:18:05.428 "name": "BaseBdev2", 00:18:05.428 "uuid": "64f5d3f8-8cb3-5ede-bd67-1b535bcfffbc", 00:18:05.428 "is_configured": true, 00:18:05.428 "data_offset": 256, 00:18:05.428 "data_size": 7936 00:18:05.428 } 00:18:05.428 ] 00:18:05.428 }' 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.428 13:34:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86443 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86443 ']' 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86443 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86443 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86443' 00:18:05.428 killing process with pid 86443 00:18:05.428 Received shutdown signal, test time was about 60.000000 seconds 00:18:05.428 00:18:05.428 Latency(us) 00:18:05.428 [2024-11-18T13:34:35.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.428 [2024-11-18T13:34:35.482Z] =================================================================================================================== 00:18:05.428 [2024-11-18T13:34:35.482Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86443 00:18:05.428 [2024-11-18 13:34:35.382754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.428 [2024-11-18 13:34:35.382862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.428 [2024-11-18 13:34:35.382931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:18:05.428 [2024-11-18 13:34:35.382943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:05.428 13:34:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86443 00:18:05.689 [2024-11-18 13:34:35.662711] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:06.630 13:34:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:06.630 00:18:06.630 real 0m19.738s 00:18:06.630 user 0m25.800s 00:18:06.630 sys 0m2.695s 00:18:06.630 13:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.630 13:34:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.630 ************************************ 00:18:06.630 END TEST raid_rebuild_test_sb_4k 00:18:06.630 ************************************ 00:18:06.891 13:34:36 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:06.891 13:34:36 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:06.891 13:34:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:06.891 13:34:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.891 13:34:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.891 ************************************ 00:18:06.891 START TEST raid_state_function_test_sb_md_separate 00:18:06.891 ************************************ 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:06.891 13:34:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:06.891 13:34:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87135 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87135' 00:18:06.891 Process raid pid: 87135 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87135 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87135 ']' 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.891 13:34:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.891 [2024-11-18 13:34:36.843570] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:06.892 [2024-11-18 13:34:36.843675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.152 [2024-11-18 13:34:37.018835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.152 [2024-11-18 13:34:37.123759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.412 [2024-11-18 13:34:37.322768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.412 [2024-11-18 13:34:37.322799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.671 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.671 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:07.671 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:07.671 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.672 [2024-11-18 13:34:37.659294] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.672 [2024-11-18 13:34:37.659349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:07.672 [2024-11-18 13:34:37.659358] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.672 [2024-11-18 13:34:37.659368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.672 13:34:37 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.672 "name": "Existed_Raid", 00:18:07.672 "uuid": "3ac29fa9-a431-4f5a-95cd-612269fe495d", 00:18:07.672 "strip_size_kb": 0, 00:18:07.672 "state": "configuring", 00:18:07.672 "raid_level": "raid1", 00:18:07.672 "superblock": true, 00:18:07.672 "num_base_bdevs": 2, 00:18:07.672 "num_base_bdevs_discovered": 0, 00:18:07.672 "num_base_bdevs_operational": 2, 00:18:07.672 "base_bdevs_list": [ 00:18:07.672 { 00:18:07.672 "name": "BaseBdev1", 00:18:07.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.672 "is_configured": false, 00:18:07.672 "data_offset": 0, 00:18:07.672 "data_size": 0 00:18:07.672 }, 00:18:07.672 { 00:18:07.672 "name": "BaseBdev2", 00:18:07.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.672 "is_configured": false, 00:18:07.672 "data_offset": 0, 00:18:07.672 "data_size": 0 00:18:07.672 } 00:18:07.672 ] 00:18:07.672 }' 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.672 13:34:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.242 [2024-11-18 
13:34:38.138481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:08.242 [2024-11-18 13:34:38.138515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.242 [2024-11-18 13:34:38.146471] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.242 [2024-11-18 13:34:38.146512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.242 [2024-11-18 13:34:38.146521] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.242 [2024-11-18 13:34:38.146532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.242 [2024-11-18 13:34:38.184580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.242 BaseBdev1 
00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.242 [ 00:18:08.242 { 00:18:08.242 "name": "BaseBdev1", 00:18:08.242 "aliases": [ 00:18:08.242 "9599bf76-68e3-42d7-85d1-78151d98ce85" 00:18:08.242 ], 00:18:08.242 "product_name": "Malloc disk", 00:18:08.242 
"block_size": 4096, 00:18:08.242 "num_blocks": 8192, 00:18:08.242 "uuid": "9599bf76-68e3-42d7-85d1-78151d98ce85", 00:18:08.242 "md_size": 32, 00:18:08.242 "md_interleave": false, 00:18:08.242 "dif_type": 0, 00:18:08.242 "assigned_rate_limits": { 00:18:08.242 "rw_ios_per_sec": 0, 00:18:08.242 "rw_mbytes_per_sec": 0, 00:18:08.242 "r_mbytes_per_sec": 0, 00:18:08.242 "w_mbytes_per_sec": 0 00:18:08.242 }, 00:18:08.242 "claimed": true, 00:18:08.242 "claim_type": "exclusive_write", 00:18:08.242 "zoned": false, 00:18:08.242 "supported_io_types": { 00:18:08.242 "read": true, 00:18:08.242 "write": true, 00:18:08.242 "unmap": true, 00:18:08.242 "flush": true, 00:18:08.242 "reset": true, 00:18:08.242 "nvme_admin": false, 00:18:08.242 "nvme_io": false, 00:18:08.242 "nvme_io_md": false, 00:18:08.242 "write_zeroes": true, 00:18:08.242 "zcopy": true, 00:18:08.242 "get_zone_info": false, 00:18:08.242 "zone_management": false, 00:18:08.242 "zone_append": false, 00:18:08.242 "compare": false, 00:18:08.242 "compare_and_write": false, 00:18:08.242 "abort": true, 00:18:08.242 "seek_hole": false, 00:18:08.242 "seek_data": false, 00:18:08.242 "copy": true, 00:18:08.242 "nvme_iov_md": false 00:18:08.242 }, 00:18:08.242 "memory_domains": [ 00:18:08.242 { 00:18:08.242 "dma_device_id": "system", 00:18:08.242 "dma_device_type": 1 00:18:08.242 }, 00:18:08.242 { 00:18:08.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.242 "dma_device_type": 2 00:18:08.242 } 00:18:08.242 ], 00:18:08.242 "driver_specific": {} 00:18:08.242 } 00:18:08.242 ] 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:08.242 13:34:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.242 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.242 "name": "Existed_Raid", 00:18:08.242 "uuid": "140f5b86-d8e4-4828-9e0d-eea7920e913b", 
00:18:08.242 "strip_size_kb": 0, 00:18:08.242 "state": "configuring", 00:18:08.242 "raid_level": "raid1", 00:18:08.242 "superblock": true, 00:18:08.243 "num_base_bdevs": 2, 00:18:08.243 "num_base_bdevs_discovered": 1, 00:18:08.243 "num_base_bdevs_operational": 2, 00:18:08.243 "base_bdevs_list": [ 00:18:08.243 { 00:18:08.243 "name": "BaseBdev1", 00:18:08.243 "uuid": "9599bf76-68e3-42d7-85d1-78151d98ce85", 00:18:08.243 "is_configured": true, 00:18:08.243 "data_offset": 256, 00:18:08.243 "data_size": 7936 00:18:08.243 }, 00:18:08.243 { 00:18:08.243 "name": "BaseBdev2", 00:18:08.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.243 "is_configured": false, 00:18:08.243 "data_offset": 0, 00:18:08.243 "data_size": 0 00:18:08.243 } 00:18:08.243 ] 00:18:08.243 }' 00:18:08.243 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.243 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.813 [2024-11-18 13:34:38.647867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:08.813 [2024-11-18 13:34:38.647908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:08.813 13:34:38 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.813 [2024-11-18 13:34:38.659886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.813 [2024-11-18 13:34:38.661590] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.813 [2024-11-18 13:34:38.661628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.813 "name": "Existed_Raid", 00:18:08.813 "uuid": "1bf6e84e-cea6-499c-8df7-f8ca41f5aff2", 00:18:08.813 "strip_size_kb": 0, 00:18:08.813 "state": "configuring", 00:18:08.813 "raid_level": "raid1", 00:18:08.813 "superblock": true, 00:18:08.813 "num_base_bdevs": 2, 00:18:08.813 "num_base_bdevs_discovered": 1, 00:18:08.813 "num_base_bdevs_operational": 2, 00:18:08.813 "base_bdevs_list": [ 00:18:08.813 { 00:18:08.813 "name": "BaseBdev1", 00:18:08.813 "uuid": "9599bf76-68e3-42d7-85d1-78151d98ce85", 00:18:08.813 "is_configured": true, 00:18:08.813 "data_offset": 256, 00:18:08.813 "data_size": 7936 00:18:08.813 }, 00:18:08.813 { 00:18:08.813 "name": "BaseBdev2", 00:18:08.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.813 "is_configured": false, 00:18:08.813 "data_offset": 0, 00:18:08.813 "data_size": 0 00:18:08.813 } 00:18:08.813 ] 00:18:08.813 }' 00:18:08.813 13:34:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.813 13:34:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.074 [2024-11-18 13:34:39.108442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.074 [2024-11-18 13:34:39.108653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:09.074 [2024-11-18 13:34:39.108667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:09.074 [2024-11-18 13:34:39.108746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:09.074 [2024-11-18 13:34:39.108859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:09.074 [2024-11-18 13:34:39.108880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:09.074 [2024-11-18 13:34:39.108976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.074 BaseBdev2 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.074 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.335 [ 00:18:09.335 { 00:18:09.335 "name": "BaseBdev2", 00:18:09.335 "aliases": [ 00:18:09.335 "834b099e-7f47-4904-8128-38b7e51554e2" 00:18:09.335 ], 00:18:09.335 "product_name": "Malloc disk", 00:18:09.335 "block_size": 4096, 00:18:09.335 "num_blocks": 8192, 00:18:09.335 "uuid": "834b099e-7f47-4904-8128-38b7e51554e2", 00:18:09.335 "md_size": 32, 00:18:09.335 "md_interleave": false, 00:18:09.335 "dif_type": 0, 00:18:09.335 "assigned_rate_limits": { 00:18:09.335 "rw_ios_per_sec": 0, 00:18:09.335 "rw_mbytes_per_sec": 0, 00:18:09.335 "r_mbytes_per_sec": 0, 00:18:09.335 "w_mbytes_per_sec": 0 00:18:09.335 }, 00:18:09.335 "claimed": true, 00:18:09.335 "claim_type": 
"exclusive_write", 00:18:09.335 "zoned": false, 00:18:09.335 "supported_io_types": { 00:18:09.335 "read": true, 00:18:09.335 "write": true, 00:18:09.335 "unmap": true, 00:18:09.335 "flush": true, 00:18:09.335 "reset": true, 00:18:09.335 "nvme_admin": false, 00:18:09.335 "nvme_io": false, 00:18:09.335 "nvme_io_md": false, 00:18:09.335 "write_zeroes": true, 00:18:09.335 "zcopy": true, 00:18:09.335 "get_zone_info": false, 00:18:09.335 "zone_management": false, 00:18:09.335 "zone_append": false, 00:18:09.335 "compare": false, 00:18:09.335 "compare_and_write": false, 00:18:09.335 "abort": true, 00:18:09.335 "seek_hole": false, 00:18:09.335 "seek_data": false, 00:18:09.335 "copy": true, 00:18:09.335 "nvme_iov_md": false 00:18:09.335 }, 00:18:09.335 "memory_domains": [ 00:18:09.335 { 00:18:09.335 "dma_device_id": "system", 00:18:09.335 "dma_device_type": 1 00:18:09.335 }, 00:18:09.335 { 00:18:09.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.335 "dma_device_type": 2 00:18:09.335 } 00:18:09.335 ], 00:18:09.335 "driver_specific": {} 00:18:09.335 } 00:18:09.335 ] 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.335 
13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.335 "name": "Existed_Raid", 00:18:09.335 "uuid": "1bf6e84e-cea6-499c-8df7-f8ca41f5aff2", 00:18:09.335 "strip_size_kb": 0, 00:18:09.335 "state": "online", 00:18:09.335 "raid_level": "raid1", 00:18:09.335 "superblock": true, 00:18:09.335 "num_base_bdevs": 2, 00:18:09.335 "num_base_bdevs_discovered": 2, 00:18:09.335 "num_base_bdevs_operational": 2, 00:18:09.335 
"base_bdevs_list": [ 00:18:09.335 { 00:18:09.335 "name": "BaseBdev1", 00:18:09.335 "uuid": "9599bf76-68e3-42d7-85d1-78151d98ce85", 00:18:09.335 "is_configured": true, 00:18:09.335 "data_offset": 256, 00:18:09.335 "data_size": 7936 00:18:09.335 }, 00:18:09.335 { 00:18:09.335 "name": "BaseBdev2", 00:18:09.335 "uuid": "834b099e-7f47-4904-8128-38b7e51554e2", 00:18:09.335 "is_configured": true, 00:18:09.335 "data_offset": 256, 00:18:09.335 "data_size": 7936 00:18:09.335 } 00:18:09.335 ] 00:18:09.335 }' 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.335 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:09.596 [2024-11-18 13:34:39.603876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.596 "name": "Existed_Raid", 00:18:09.596 "aliases": [ 00:18:09.596 "1bf6e84e-cea6-499c-8df7-f8ca41f5aff2" 00:18:09.596 ], 00:18:09.596 "product_name": "Raid Volume", 00:18:09.596 "block_size": 4096, 00:18:09.596 "num_blocks": 7936, 00:18:09.596 "uuid": "1bf6e84e-cea6-499c-8df7-f8ca41f5aff2", 00:18:09.596 "md_size": 32, 00:18:09.596 "md_interleave": false, 00:18:09.596 "dif_type": 0, 00:18:09.596 "assigned_rate_limits": { 00:18:09.596 "rw_ios_per_sec": 0, 00:18:09.596 "rw_mbytes_per_sec": 0, 00:18:09.596 "r_mbytes_per_sec": 0, 00:18:09.596 "w_mbytes_per_sec": 0 00:18:09.596 }, 00:18:09.596 "claimed": false, 00:18:09.596 "zoned": false, 00:18:09.596 "supported_io_types": { 00:18:09.596 "read": true, 00:18:09.596 "write": true, 00:18:09.596 "unmap": false, 00:18:09.596 "flush": false, 00:18:09.596 "reset": true, 00:18:09.596 "nvme_admin": false, 00:18:09.596 "nvme_io": false, 00:18:09.596 "nvme_io_md": false, 00:18:09.596 "write_zeroes": true, 00:18:09.596 "zcopy": false, 00:18:09.596 "get_zone_info": false, 00:18:09.596 "zone_management": false, 00:18:09.596 "zone_append": false, 00:18:09.596 "compare": false, 00:18:09.596 "compare_and_write": false, 00:18:09.596 "abort": false, 00:18:09.596 "seek_hole": false, 00:18:09.596 "seek_data": false, 00:18:09.596 "copy": false, 00:18:09.596 "nvme_iov_md": false 00:18:09.596 }, 00:18:09.596 "memory_domains": [ 00:18:09.596 { 00:18:09.596 "dma_device_id": "system", 00:18:09.596 "dma_device_type": 1 00:18:09.596 }, 00:18:09.596 { 00:18:09.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.596 "dma_device_type": 2 00:18:09.596 }, 00:18:09.596 { 
00:18:09.596 "dma_device_id": "system", 00:18:09.596 "dma_device_type": 1 00:18:09.596 }, 00:18:09.596 { 00:18:09.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.596 "dma_device_type": 2 00:18:09.596 } 00:18:09.596 ], 00:18:09.596 "driver_specific": { 00:18:09.596 "raid": { 00:18:09.596 "uuid": "1bf6e84e-cea6-499c-8df7-f8ca41f5aff2", 00:18:09.596 "strip_size_kb": 0, 00:18:09.596 "state": "online", 00:18:09.596 "raid_level": "raid1", 00:18:09.596 "superblock": true, 00:18:09.596 "num_base_bdevs": 2, 00:18:09.596 "num_base_bdevs_discovered": 2, 00:18:09.596 "num_base_bdevs_operational": 2, 00:18:09.596 "base_bdevs_list": [ 00:18:09.596 { 00:18:09.596 "name": "BaseBdev1", 00:18:09.596 "uuid": "9599bf76-68e3-42d7-85d1-78151d98ce85", 00:18:09.596 "is_configured": true, 00:18:09.596 "data_offset": 256, 00:18:09.596 "data_size": 7936 00:18:09.596 }, 00:18:09.596 { 00:18:09.596 "name": "BaseBdev2", 00:18:09.596 "uuid": "834b099e-7f47-4904-8128-38b7e51554e2", 00:18:09.596 "is_configured": true, 00:18:09.596 "data_offset": 256, 00:18:09.596 "data_size": 7936 00:18:09.596 } 00:18:09.596 ] 00:18:09.596 } 00:18:09.596 } 00:18:09.596 }' 00:18:09.596 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:09.857 BaseBdev2' 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.857 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.857 [2024-11-18 13:34:39.815288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.116 "name": "Existed_Raid", 00:18:10.116 "uuid": "1bf6e84e-cea6-499c-8df7-f8ca41f5aff2", 00:18:10.116 "strip_size_kb": 0, 00:18:10.116 "state": "online", 00:18:10.116 "raid_level": "raid1", 00:18:10.116 "superblock": true, 00:18:10.116 "num_base_bdevs": 2, 00:18:10.116 "num_base_bdevs_discovered": 1, 00:18:10.116 "num_base_bdevs_operational": 1, 00:18:10.116 "base_bdevs_list": [ 00:18:10.116 { 00:18:10.116 "name": null, 00:18:10.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.116 "is_configured": false, 00:18:10.116 "data_offset": 0, 00:18:10.116 "data_size": 7936 00:18:10.116 }, 00:18:10.116 { 00:18:10.116 "name": "BaseBdev2", 00:18:10.116 "uuid": 
"834b099e-7f47-4904-8128-38b7e51554e2", 00:18:10.116 "is_configured": true, 00:18:10.116 "data_offset": 256, 00:18:10.116 "data_size": 7936 00:18:10.116 } 00:18:10.116 ] 00:18:10.116 }' 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.116 13:34:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.375 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:10.375 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.375 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.375 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:10.375 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.376 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.376 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.376 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:10.376 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.376 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:10.376 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.376 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.376 [2024-11-18 13:34:40.422994] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:10.376 [2024-11-18 13:34:40.423098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.636 [2024-11-18 13:34:40.518916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.636 [2024-11-18 13:34:40.518987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.636 [2024-11-18 13:34:40.518999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:10.636 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:10.636 13:34:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87135 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87135 ']' 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87135 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87135 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.637 killing process with pid 87135 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87135' 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87135 00:18:10.637 [2024-11-18 13:34:40.613501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.637 13:34:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87135 00:18:10.637 [2024-11-18 13:34:40.629279] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.020 13:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:12.020 00:18:12.020 real 0m4.917s 00:18:12.020 user 0m7.106s 00:18:12.020 sys 0m0.884s 00:18:12.020 13:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.020 
13:34:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.020 ************************************ 00:18:12.020 END TEST raid_state_function_test_sb_md_separate 00:18:12.020 ************************************ 00:18:12.020 13:34:41 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:12.020 13:34:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:12.020 13:34:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.020 13:34:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.020 ************************************ 00:18:12.020 START TEST raid_superblock_test_md_separate 00:18:12.020 ************************************ 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87384 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87384 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87384 ']' 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.020 13:34:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.020 [2024-11-18 13:34:41.833785] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:12.021 [2024-11-18 13:34:41.833895] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87384 ] 00:18:12.021 [2024-11-18 13:34:42.005729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.280 [2024-11-18 13:34:42.108145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.280 [2024-11-18 13:34:42.275598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.280 [2024-11-18 13:34:42.275652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:12.849 13:34:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.849 malloc1 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.849 [2024-11-18 13:34:42.695433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:12.849 [2024-11-18 13:34:42.695492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.849 [2024-11-18 13:34:42.695514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:12.849 [2024-11-18 13:34:42.695523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.849 [2024-11-18 13:34:42.697284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.849 [2024-11-18 13:34:42.697319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:12.849 pt1 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.849 malloc2 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.849 13:34:42 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.849 [2024-11-18 13:34:42.752972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.849 [2024-11-18 13:34:42.753021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.849 [2024-11-18 13:34:42.753040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:12.849 [2024-11-18 13:34:42.753048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.849 [2024-11-18 13:34:42.754719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.849 [2024-11-18 13:34:42.754751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.849 pt2 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.849 [2024-11-18 13:34:42.764983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:12.849 [2024-11-18 13:34:42.766614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.849 [2024-11-18 13:34:42.766785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:12.849 [2024-11-18 13:34:42.766800] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:12.849 [2024-11-18 13:34:42.766872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:12.849 [2024-11-18 13:34:42.767007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:12.849 [2024-11-18 13:34:42.767024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:12.849 [2024-11-18 13:34:42.767148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.849 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.850 13:34:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.850 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.850 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.850 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.850 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.850 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.850 "name": "raid_bdev1", 00:18:12.850 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:12.850 "strip_size_kb": 0, 00:18:12.850 "state": "online", 00:18:12.850 "raid_level": "raid1", 00:18:12.850 "superblock": true, 00:18:12.850 "num_base_bdevs": 2, 00:18:12.850 "num_base_bdevs_discovered": 2, 00:18:12.850 "num_base_bdevs_operational": 2, 00:18:12.850 "base_bdevs_list": [ 00:18:12.850 { 00:18:12.850 "name": "pt1", 00:18:12.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.850 "is_configured": true, 00:18:12.850 "data_offset": 256, 00:18:12.850 "data_size": 7936 00:18:12.850 }, 00:18:12.850 { 00:18:12.850 "name": "pt2", 00:18:12.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.850 "is_configured": true, 00:18:12.850 "data_offset": 256, 00:18:12.850 "data_size": 7936 00:18:12.850 } 00:18:12.850 ] 00:18:12.850 }' 00:18:12.850 13:34:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.850 13:34:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:13.419 [2024-11-18 13:34:43.272400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:13.419 "name": "raid_bdev1", 00:18:13.419 "aliases": [ 00:18:13.419 "3d589c23-572f-4012-91bc-ce79ab406619" 00:18:13.419 ], 00:18:13.419 "product_name": "Raid Volume", 00:18:13.419 "block_size": 4096, 00:18:13.419 "num_blocks": 7936, 00:18:13.419 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:13.419 "md_size": 32, 00:18:13.419 "md_interleave": false, 00:18:13.419 "dif_type": 0, 00:18:13.419 "assigned_rate_limits": { 00:18:13.419 "rw_ios_per_sec": 0, 00:18:13.419 "rw_mbytes_per_sec": 0, 00:18:13.419 "r_mbytes_per_sec": 0, 00:18:13.419 "w_mbytes_per_sec": 0 00:18:13.419 }, 00:18:13.419 "claimed": false, 00:18:13.419 "zoned": false, 
00:18:13.419 "supported_io_types": { 00:18:13.419 "read": true, 00:18:13.419 "write": true, 00:18:13.419 "unmap": false, 00:18:13.419 "flush": false, 00:18:13.419 "reset": true, 00:18:13.419 "nvme_admin": false, 00:18:13.419 "nvme_io": false, 00:18:13.419 "nvme_io_md": false, 00:18:13.419 "write_zeroes": true, 00:18:13.419 "zcopy": false, 00:18:13.419 "get_zone_info": false, 00:18:13.419 "zone_management": false, 00:18:13.419 "zone_append": false, 00:18:13.419 "compare": false, 00:18:13.419 "compare_and_write": false, 00:18:13.419 "abort": false, 00:18:13.419 "seek_hole": false, 00:18:13.419 "seek_data": false, 00:18:13.419 "copy": false, 00:18:13.419 "nvme_iov_md": false 00:18:13.419 }, 00:18:13.419 "memory_domains": [ 00:18:13.419 { 00:18:13.419 "dma_device_id": "system", 00:18:13.419 "dma_device_type": 1 00:18:13.419 }, 00:18:13.419 { 00:18:13.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.419 "dma_device_type": 2 00:18:13.419 }, 00:18:13.419 { 00:18:13.419 "dma_device_id": "system", 00:18:13.419 "dma_device_type": 1 00:18:13.419 }, 00:18:13.419 { 00:18:13.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.419 "dma_device_type": 2 00:18:13.419 } 00:18:13.419 ], 00:18:13.419 "driver_specific": { 00:18:13.419 "raid": { 00:18:13.419 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:13.419 "strip_size_kb": 0, 00:18:13.419 "state": "online", 00:18:13.419 "raid_level": "raid1", 00:18:13.419 "superblock": true, 00:18:13.419 "num_base_bdevs": 2, 00:18:13.419 "num_base_bdevs_discovered": 2, 00:18:13.419 "num_base_bdevs_operational": 2, 00:18:13.419 "base_bdevs_list": [ 00:18:13.419 { 00:18:13.419 "name": "pt1", 00:18:13.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.419 "is_configured": true, 00:18:13.419 "data_offset": 256, 00:18:13.419 "data_size": 7936 00:18:13.419 }, 00:18:13.419 { 00:18:13.419 "name": "pt2", 00:18:13.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.419 "is_configured": true, 00:18:13.419 "data_offset": 256, 
00:18:13.419 "data_size": 7936 00:18:13.419 } 00:18:13.419 ] 00:18:13.419 } 00:18:13.419 } 00:18:13.419 }' 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:13.419 pt2' 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.419 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.680 [2024-11-18 13:34:43.499961] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3d589c23-572f-4012-91bc-ce79ab406619 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 3d589c23-572f-4012-91bc-ce79ab406619 ']' 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.680 [2024-11-18 13:34:43.539654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.680 [2024-11-18 13:34:43.539679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.680 [2024-11-18 13:34:43.539749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.680 [2024-11-18 13:34:43.539797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.680 [2024-11-18 13:34:43.539810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.680 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:13.681 13:34:43 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.681 [2024-11-18 13:34:43.651482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:13.681 [2024-11-18 13:34:43.653231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:13.681 [2024-11-18 13:34:43.653300] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:13.681 [2024-11-18 13:34:43.653343] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:13.681 [2024-11-18 13:34:43.653356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.681 [2024-11-18 13:34:43.653366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:13.681 request: 00:18:13.681 { 00:18:13.681 "name": 
"raid_bdev1", 00:18:13.681 "raid_level": "raid1", 00:18:13.681 "base_bdevs": [ 00:18:13.681 "malloc1", 00:18:13.681 "malloc2" 00:18:13.681 ], 00:18:13.681 "superblock": false, 00:18:13.681 "method": "bdev_raid_create", 00:18:13.681 "req_id": 1 00:18:13.681 } 00:18:13.681 Got JSON-RPC error response 00:18:13.681 response: 00:18:13.681 { 00:18:13.681 "code": -17, 00:18:13.681 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:13.681 } 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.681 [2024-11-18 13:34:43.715353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.681 [2024-11-18 13:34:43.715401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.681 [2024-11-18 13:34:43.715415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:13.681 [2024-11-18 13:34:43.715425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.681 [2024-11-18 13:34:43.717218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.681 [2024-11-18 13:34:43.717254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.681 [2024-11-18 13:34:43.717290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:13.681 [2024-11-18 13:34:43.717334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.681 pt1 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.681 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.941 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.941 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.941 "name": "raid_bdev1", 00:18:13.941 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:13.941 "strip_size_kb": 0, 00:18:13.941 "state": "configuring", 00:18:13.941 "raid_level": "raid1", 00:18:13.941 "superblock": true, 00:18:13.941 "num_base_bdevs": 2, 00:18:13.941 "num_base_bdevs_discovered": 1, 00:18:13.941 "num_base_bdevs_operational": 2, 00:18:13.941 "base_bdevs_list": [ 00:18:13.941 { 00:18:13.941 "name": "pt1", 00:18:13.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.941 "is_configured": true, 00:18:13.941 "data_offset": 256, 00:18:13.941 "data_size": 7936 00:18:13.941 }, 00:18:13.941 { 00:18:13.941 "name": null, 00:18:13.941 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.941 "is_configured": false, 00:18:13.941 "data_offset": 256, 00:18:13.941 "data_size": 7936 00:18:13.941 } 00:18:13.941 ] 00:18:13.941 }' 00:18:13.941 13:34:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.941 13:34:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.201 [2024-11-18 13:34:44.138689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.201 [2024-11-18 13:34:44.138748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.201 [2024-11-18 13:34:44.138765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:14.201 [2024-11-18 13:34:44.138775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.201 [2024-11-18 13:34:44.138933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.201 [2024-11-18 13:34:44.138953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.201 [2024-11-18 13:34:44.138987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:14.201 [2024-11-18 13:34:44.139005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.201 [2024-11-18 13:34:44.139099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:14.201 [2024-11-18 13:34:44.139115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:14.201 [2024-11-18 13:34:44.139190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:14.201 [2024-11-18 13:34:44.139290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:14.201 [2024-11-18 13:34:44.139307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:14.201 [2024-11-18 13:34:44.139401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.201 pt2 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.201 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.202 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.202 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.202 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.202 "name": "raid_bdev1", 00:18:14.202 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:14.202 "strip_size_kb": 0, 00:18:14.202 "state": "online", 00:18:14.202 "raid_level": "raid1", 00:18:14.202 "superblock": true, 00:18:14.202 "num_base_bdevs": 2, 00:18:14.202 "num_base_bdevs_discovered": 2, 00:18:14.202 "num_base_bdevs_operational": 2, 00:18:14.202 "base_bdevs_list": [ 00:18:14.202 { 00:18:14.202 "name": "pt1", 00:18:14.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.202 "is_configured": true, 00:18:14.202 "data_offset": 256, 00:18:14.202 "data_size": 7936 00:18:14.202 }, 00:18:14.202 { 00:18:14.202 "name": "pt2", 00:18:14.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.202 "is_configured": true, 00:18:14.202 "data_offset": 256, 
00:18:14.202 "data_size": 7936 00:18:14.202 } 00:18:14.202 ] 00:18:14.202 }' 00:18:14.202 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.202 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.772 [2024-11-18 13:34:44.634058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.772 "name": "raid_bdev1", 00:18:14.772 "aliases": [ 00:18:14.772 "3d589c23-572f-4012-91bc-ce79ab406619" 00:18:14.772 ], 00:18:14.772 "product_name": 
"Raid Volume", 00:18:14.772 "block_size": 4096, 00:18:14.772 "num_blocks": 7936, 00:18:14.772 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:14.772 "md_size": 32, 00:18:14.772 "md_interleave": false, 00:18:14.772 "dif_type": 0, 00:18:14.772 "assigned_rate_limits": { 00:18:14.772 "rw_ios_per_sec": 0, 00:18:14.772 "rw_mbytes_per_sec": 0, 00:18:14.772 "r_mbytes_per_sec": 0, 00:18:14.772 "w_mbytes_per_sec": 0 00:18:14.772 }, 00:18:14.772 "claimed": false, 00:18:14.772 "zoned": false, 00:18:14.772 "supported_io_types": { 00:18:14.772 "read": true, 00:18:14.772 "write": true, 00:18:14.772 "unmap": false, 00:18:14.772 "flush": false, 00:18:14.772 "reset": true, 00:18:14.772 "nvme_admin": false, 00:18:14.772 "nvme_io": false, 00:18:14.772 "nvme_io_md": false, 00:18:14.772 "write_zeroes": true, 00:18:14.772 "zcopy": false, 00:18:14.772 "get_zone_info": false, 00:18:14.772 "zone_management": false, 00:18:14.772 "zone_append": false, 00:18:14.772 "compare": false, 00:18:14.772 "compare_and_write": false, 00:18:14.772 "abort": false, 00:18:14.772 "seek_hole": false, 00:18:14.772 "seek_data": false, 00:18:14.772 "copy": false, 00:18:14.772 "nvme_iov_md": false 00:18:14.772 }, 00:18:14.772 "memory_domains": [ 00:18:14.772 { 00:18:14.772 "dma_device_id": "system", 00:18:14.772 "dma_device_type": 1 00:18:14.772 }, 00:18:14.772 { 00:18:14.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.772 "dma_device_type": 2 00:18:14.772 }, 00:18:14.772 { 00:18:14.772 "dma_device_id": "system", 00:18:14.772 "dma_device_type": 1 00:18:14.772 }, 00:18:14.772 { 00:18:14.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.772 "dma_device_type": 2 00:18:14.772 } 00:18:14.772 ], 00:18:14.772 "driver_specific": { 00:18:14.772 "raid": { 00:18:14.772 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:14.772 "strip_size_kb": 0, 00:18:14.772 "state": "online", 00:18:14.772 "raid_level": "raid1", 00:18:14.772 "superblock": true, 00:18:14.772 "num_base_bdevs": 2, 00:18:14.772 
"num_base_bdevs_discovered": 2, 00:18:14.772 "num_base_bdevs_operational": 2, 00:18:14.772 "base_bdevs_list": [ 00:18:14.772 { 00:18:14.772 "name": "pt1", 00:18:14.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.772 "is_configured": true, 00:18:14.772 "data_offset": 256, 00:18:14.772 "data_size": 7936 00:18:14.772 }, 00:18:14.772 { 00:18:14.772 "name": "pt2", 00:18:14.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.772 "is_configured": true, 00:18:14.772 "data_offset": 256, 00:18:14.772 "data_size": 7936 00:18:14.772 } 00:18:14.772 ] 00:18:14.772 } 00:18:14.772 } 00:18:14.772 }' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:14.772 pt2' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.772 
13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.772 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.032 [2024-11-18 13:34:44.869657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 3d589c23-572f-4012-91bc-ce79ab406619 '!=' 3d589c23-572f-4012-91bc-ce79ab406619 ']' 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.032 [2024-11-18 13:34:44.913372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.032 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.033 13:34:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.033 "name": "raid_bdev1", 00:18:15.033 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:15.033 "strip_size_kb": 0, 00:18:15.033 "state": "online", 00:18:15.033 "raid_level": "raid1", 00:18:15.033 "superblock": true, 00:18:15.033 "num_base_bdevs": 2, 00:18:15.033 "num_base_bdevs_discovered": 1, 00:18:15.033 "num_base_bdevs_operational": 1, 00:18:15.033 "base_bdevs_list": [ 00:18:15.033 { 00:18:15.033 "name": null, 00:18:15.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.033 "is_configured": false, 00:18:15.033 "data_offset": 0, 00:18:15.033 "data_size": 7936 00:18:15.033 }, 00:18:15.033 { 00:18:15.033 "name": "pt2", 00:18:15.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.033 "is_configured": true, 00:18:15.033 "data_offset": 256, 00:18:15.033 "data_size": 7936 00:18:15.033 } 00:18:15.033 ] 00:18:15.033 }' 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:15.033 13:34:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.622 [2024-11-18 13:34:45.388520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.622 [2024-11-18 13:34:45.388546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.622 [2024-11-18 13:34:45.388602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.622 [2024-11-18 13:34:45.388640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.622 [2024-11-18 13:34:45.388650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:15.622 13:34:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.622 [2024-11-18 13:34:45.472391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.622 [2024-11-18 13:34:45.472444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.622 
[2024-11-18 13:34:45.472461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:15.622 [2024-11-18 13:34:45.472471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.622 [2024-11-18 13:34:45.474311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.622 [2024-11-18 13:34:45.474347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:15.622 [2024-11-18 13:34:45.474386] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:15.622 [2024-11-18 13:34:45.474431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.622 [2024-11-18 13:34:45.474502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:15.622 [2024-11-18 13:34:45.474525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:15.622 [2024-11-18 13:34:45.474592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:15.622 [2024-11-18 13:34:45.474700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:15.622 [2024-11-18 13:34:45.474712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:15.622 [2024-11-18 13:34:45.474796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.622 pt2 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.622 "name": "raid_bdev1", 00:18:15.622 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:15.622 "strip_size_kb": 0, 00:18:15.622 "state": "online", 00:18:15.622 "raid_level": "raid1", 00:18:15.622 "superblock": true, 00:18:15.622 "num_base_bdevs": 2, 00:18:15.622 "num_base_bdevs_discovered": 1, 00:18:15.622 "num_base_bdevs_operational": 1, 00:18:15.622 "base_bdevs_list": [ 00:18:15.622 { 00:18:15.622 
"name": null, 00:18:15.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.622 "is_configured": false, 00:18:15.622 "data_offset": 256, 00:18:15.622 "data_size": 7936 00:18:15.622 }, 00:18:15.622 { 00:18:15.622 "name": "pt2", 00:18:15.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.622 "is_configured": true, 00:18:15.622 "data_offset": 256, 00:18:15.622 "data_size": 7936 00:18:15.622 } 00:18:15.622 ] 00:18:15.622 }' 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.622 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.882 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:15.882 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.882 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.882 [2024-11-18 13:34:45.879638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.882 [2024-11-18 13:34:45.879665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.882 [2024-11-18 13:34:45.879714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.882 [2024-11-18 13:34:45.879754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.882 [2024-11-18 13:34:45.879765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:15.882 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.882 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.882 13:34:45 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.883 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:15.883 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.883 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.143 [2024-11-18 13:34:45.943560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:16.143 [2024-11-18 13:34:45.943609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.143 [2024-11-18 13:34:45.943626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:16.143 [2024-11-18 13:34:45.943633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.143 [2024-11-18 13:34:45.945443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.143 [2024-11-18 13:34:45.945553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:16.143 [2024-11-18 13:34:45.945602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:16.143 [2024-11-18 13:34:45.945640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:16.143 [2024-11-18 13:34:45.945756] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:16.143 [2024-11-18 13:34:45.945765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.143 [2024-11-18 13:34:45.945779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:16.143 [2024-11-18 13:34:45.945854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.143 [2024-11-18 13:34:45.945911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:16.143 [2024-11-18 13:34:45.945918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.143 [2024-11-18 13:34:45.945980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:16.143 [2024-11-18 13:34:45.946070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:16.143 [2024-11-18 13:34:45.946079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:16.143 [2024-11-18 13:34:45.946189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.143 pt1 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.143 "name": "raid_bdev1", 00:18:16.143 "uuid": "3d589c23-572f-4012-91bc-ce79ab406619", 00:18:16.143 "strip_size_kb": 0, 00:18:16.143 "state": "online", 00:18:16.143 "raid_level": "raid1", 00:18:16.143 "superblock": true, 00:18:16.143 "num_base_bdevs": 2, 00:18:16.143 "num_base_bdevs_discovered": 1, 00:18:16.143 
"num_base_bdevs_operational": 1, 00:18:16.143 "base_bdevs_list": [ 00:18:16.143 { 00:18:16.143 "name": null, 00:18:16.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.143 "is_configured": false, 00:18:16.143 "data_offset": 256, 00:18:16.143 "data_size": 7936 00:18:16.143 }, 00:18:16.143 { 00:18:16.143 "name": "pt2", 00:18:16.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.143 "is_configured": true, 00:18:16.143 "data_offset": 256, 00:18:16.143 "data_size": 7936 00:18:16.143 } 00:18:16.143 ] 00:18:16.143 }' 00:18:16.143 13:34:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.143 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.403 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.664 [2024-11-18 
13:34:46.458939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 3d589c23-572f-4012-91bc-ce79ab406619 '!=' 3d589c23-572f-4012-91bc-ce79ab406619 ']' 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87384 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87384 ']' 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87384 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87384 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.664 killing process with pid 87384 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87384' 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87384 00:18:16.664 [2024-11-18 13:34:46.526558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.664 [2024-11-18 13:34:46.526629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.664 [2024-11-18 13:34:46.526664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:16.664 [2024-11-18 13:34:46.526679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:16.664 13:34:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87384 00:18:16.924 [2024-11-18 13:34:46.739304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.864 13:34:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:17.864 00:18:17.864 real 0m6.027s 00:18:17.864 user 0m9.172s 00:18:17.864 sys 0m1.112s 00:18:17.864 ************************************ 00:18:17.864 END TEST raid_superblock_test_md_separate 00:18:17.864 ************************************ 00:18:17.864 13:34:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.864 13:34:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.864 13:34:47 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:17.864 13:34:47 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:17.864 13:34:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:17.864 13:34:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.864 13:34:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.864 ************************************ 00:18:17.864 START TEST raid_rebuild_test_sb_md_separate 00:18:17.864 ************************************ 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:17.864 
13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87714 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87714 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87714 ']' 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.864 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.865 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.865 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.865 13:34:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.125 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:18.125 Zero copy mechanism will not be used. 00:18:18.125 [2024-11-18 13:34:47.939516] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:18.125 [2024-11-18 13:34:47.939626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87714 ] 00:18:18.125 [2024-11-18 13:34:48.113526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.385 [2024-11-18 13:34:48.219290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.385 [2024-11-18 13:34:48.407878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.385 [2024-11-18 13:34:48.407915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.956 BaseBdev1_malloc 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:18.956 13:34:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.956 [2024-11-18 13:34:48.790804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:18.956 [2024-11-18 13:34:48.790933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.956 [2024-11-18 13:34:48.790976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:18.956 [2024-11-18 13:34:48.791007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.956 [2024-11-18 13:34:48.792857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.956 [2024-11-18 13:34:48.792938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:18.956 BaseBdev1 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.956 BaseBdev2_malloc 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:18.956 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.957 [2024-11-18 13:34:48.844516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:18.957 [2024-11-18 13:34:48.844572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.957 [2024-11-18 13:34:48.844591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:18.957 [2024-11-18 13:34:48.844601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.957 [2024-11-18 13:34:48.846304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.957 [2024-11-18 13:34:48.846339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:18.957 BaseBdev2 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.957 spare_malloc 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.957 spare_delay 00:18:18.957 13:34:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.957 [2024-11-18 13:34:48.947618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:18.957 [2024-11-18 13:34:48.947716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.957 [2024-11-18 13:34:48.947757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:18.957 [2024-11-18 13:34:48.947788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.957 [2024-11-18 13:34:48.949588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.957 [2024-11-18 13:34:48.949673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:18.957 spare 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.957 [2024-11-18 13:34:48.959635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.957 [2024-11-18 13:34:48.961331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:18:18.957 [2024-11-18 13:34:48.961538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:18.957 [2024-11-18 13:34:48.961573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:18.957 [2024-11-18 13:34:48.961666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:18.957 [2024-11-18 13:34:48.961808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:18.957 [2024-11-18 13:34:48.961843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:18.957 [2024-11-18 13:34:48.961952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.957 13:34:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.217 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.217 "name": "raid_bdev1", 00:18:19.217 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:19.217 "strip_size_kb": 0, 00:18:19.217 "state": "online", 00:18:19.217 "raid_level": "raid1", 00:18:19.217 "superblock": true, 00:18:19.217 "num_base_bdevs": 2, 00:18:19.217 "num_base_bdevs_discovered": 2, 00:18:19.217 "num_base_bdevs_operational": 2, 00:18:19.217 "base_bdevs_list": [ 00:18:19.217 { 00:18:19.217 "name": "BaseBdev1", 00:18:19.217 "uuid": "bbb4b87a-1848-5715-963b-99058718be7e", 00:18:19.217 "is_configured": true, 00:18:19.217 "data_offset": 256, 00:18:19.217 "data_size": 7936 00:18:19.217 }, 00:18:19.217 { 00:18:19.217 "name": "BaseBdev2", 00:18:19.217 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:19.217 "is_configured": true, 00:18:19.217 "data_offset": 256, 00:18:19.217 "data_size": 7936 00:18:19.217 } 00:18:19.217 ] 00:18:19.217 }' 00:18:19.217 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.217 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 13:34:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 [2024-11-18 13:34:49.371230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.477 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:19.737 [2024-11-18 13:34:49.610584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:19.737 /dev/nbd0 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.737 
13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.737 1+0 records in 00:18:19.737 1+0 records out 00:18:19.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523245 s, 7.8 MB/s 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:19.737 13:34:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:20.308 7936+0 records in 00:18:20.308 7936+0 records out 00:18:20.308 32505856 bytes (33 MB, 31 MiB) copied, 0.643597 s, 50.5 MB/s 00:18:20.308 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:20.308 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:20.308 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:20.308 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:20.308 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:20.308 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.308 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:20.568 [2024-11-18 13:34:50.546075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.568 [2024-11-18 13:34:50.562159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.568 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.828 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.828 "name": "raid_bdev1", 00:18:20.828 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:20.828 "strip_size_kb": 0, 00:18:20.828 "state": "online", 00:18:20.828 "raid_level": "raid1", 00:18:20.828 "superblock": true, 00:18:20.828 "num_base_bdevs": 2, 00:18:20.828 "num_base_bdevs_discovered": 1, 00:18:20.828 "num_base_bdevs_operational": 1, 00:18:20.828 "base_bdevs_list": [ 00:18:20.828 { 00:18:20.828 "name": null, 00:18:20.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.828 "is_configured": false, 00:18:20.828 "data_offset": 0, 00:18:20.828 "data_size": 7936 00:18:20.828 }, 00:18:20.828 { 00:18:20.828 "name": "BaseBdev2", 00:18:20.828 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:20.828 "is_configured": true, 00:18:20.828 "data_offset": 256, 00:18:20.828 "data_size": 7936 00:18:20.828 } 00:18:20.828 ] 00:18:20.828 }' 00:18:20.828 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.828 13:34:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.088 13:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:21.088 13:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:21.088 13:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.088 [2024-11-18 13:34:51.037296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.088 [2024-11-18 13:34:51.051596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:21.088 13:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.088 13:34:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:21.088 [2024-11-18 13:34:51.053297] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.028 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.288 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.288 13:34:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.288 "name": "raid_bdev1", 00:18:22.288 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:22.288 "strip_size_kb": 0, 00:18:22.288 "state": "online", 00:18:22.288 "raid_level": "raid1", 00:18:22.288 "superblock": true, 00:18:22.288 "num_base_bdevs": 2, 00:18:22.288 "num_base_bdevs_discovered": 2, 00:18:22.288 "num_base_bdevs_operational": 2, 00:18:22.288 "process": { 00:18:22.288 "type": "rebuild", 00:18:22.288 "target": "spare", 00:18:22.288 "progress": { 00:18:22.288 "blocks": 2560, 00:18:22.289 "percent": 32 00:18:22.289 } 00:18:22.289 }, 00:18:22.289 "base_bdevs_list": [ 00:18:22.289 { 00:18:22.289 "name": "spare", 00:18:22.289 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:22.289 "is_configured": true, 00:18:22.289 "data_offset": 256, 00:18:22.289 "data_size": 7936 00:18:22.289 }, 00:18:22.289 { 00:18:22.289 "name": "BaseBdev2", 00:18:22.289 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:22.289 "is_configured": true, 00:18:22.289 "data_offset": 256, 00:18:22.289 "data_size": 7936 00:18:22.289 } 00:18:22.289 ] 00:18:22.289 }' 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.289 [2024-11-18 13:34:52.212961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.289 [2024-11-18 13:34:52.257959] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:22.289 [2024-11-18 13:34:52.258064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.289 [2024-11-18 13:34:52.258095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.289 [2024-11-18 13:34:52.258117] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.289 "name": "raid_bdev1", 00:18:22.289 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:22.289 "strip_size_kb": 0, 00:18:22.289 "state": "online", 00:18:22.289 "raid_level": "raid1", 00:18:22.289 "superblock": true, 00:18:22.289 "num_base_bdevs": 2, 00:18:22.289 "num_base_bdevs_discovered": 1, 00:18:22.289 "num_base_bdevs_operational": 1, 00:18:22.289 "base_bdevs_list": [ 00:18:22.289 { 00:18:22.289 "name": null, 00:18:22.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.289 "is_configured": false, 00:18:22.289 "data_offset": 0, 00:18:22.289 "data_size": 7936 00:18:22.289 }, 00:18:22.289 { 00:18:22.289 "name": "BaseBdev2", 00:18:22.289 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:22.289 "is_configured": true, 00:18:22.289 "data_offset": 256, 00:18:22.289 "data_size": 7936 00:18:22.289 } 00:18:22.289 ] 00:18:22.289 }' 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.289 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.860 13:34:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.860 "name": "raid_bdev1", 00:18:22.860 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:22.860 "strip_size_kb": 0, 00:18:22.860 "state": "online", 00:18:22.860 "raid_level": "raid1", 00:18:22.860 "superblock": true, 00:18:22.860 "num_base_bdevs": 2, 00:18:22.860 "num_base_bdevs_discovered": 1, 00:18:22.860 "num_base_bdevs_operational": 1, 00:18:22.860 "base_bdevs_list": [ 00:18:22.860 { 00:18:22.860 "name": null, 00:18:22.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.860 "is_configured": false, 00:18:22.860 "data_offset": 0, 00:18:22.860 "data_size": 7936 00:18:22.860 }, 00:18:22.860 { 00:18:22.860 "name": "BaseBdev2", 00:18:22.860 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:22.860 "is_configured": true, 00:18:22.860 "data_offset": 256, 00:18:22.860 "data_size": 7936 
00:18:22.860 } 00:18:22.860 ] 00:18:22.860 }' 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.860 [2024-11-18 13:34:52.836332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.860 [2024-11-18 13:34:52.849866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.860 13:34:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:22.860 [2024-11-18 13:34:52.851623] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.243 "name": "raid_bdev1", 00:18:24.243 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:24.243 "strip_size_kb": 0, 00:18:24.243 "state": "online", 00:18:24.243 "raid_level": "raid1", 00:18:24.243 "superblock": true, 00:18:24.243 "num_base_bdevs": 2, 00:18:24.243 "num_base_bdevs_discovered": 2, 00:18:24.243 "num_base_bdevs_operational": 2, 00:18:24.243 "process": { 00:18:24.243 "type": "rebuild", 00:18:24.243 "target": "spare", 00:18:24.243 "progress": { 00:18:24.243 "blocks": 2560, 00:18:24.243 "percent": 32 00:18:24.243 } 00:18:24.243 }, 00:18:24.243 "base_bdevs_list": [ 00:18:24.243 { 00:18:24.243 "name": "spare", 00:18:24.243 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:24.243 "is_configured": true, 00:18:24.243 "data_offset": 256, 00:18:24.243 "data_size": 7936 00:18:24.243 }, 00:18:24.243 { 00:18:24.243 "name": "BaseBdev2", 00:18:24.243 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:24.243 "is_configured": true, 00:18:24.243 "data_offset": 256, 00:18:24.243 "data_size": 7936 00:18:24.243 } 00:18:24.243 ] 00:18:24.243 }' 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.243 13:34:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:24.243 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=708 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.243 
13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.243 "name": "raid_bdev1", 00:18:24.243 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:24.243 "strip_size_kb": 0, 00:18:24.243 "state": "online", 00:18:24.243 "raid_level": "raid1", 00:18:24.243 "superblock": true, 00:18:24.243 "num_base_bdevs": 2, 00:18:24.243 "num_base_bdevs_discovered": 2, 00:18:24.243 "num_base_bdevs_operational": 2, 00:18:24.243 "process": { 00:18:24.243 "type": "rebuild", 00:18:24.243 "target": "spare", 00:18:24.243 "progress": { 00:18:24.243 "blocks": 2816, 00:18:24.243 "percent": 35 00:18:24.243 } 00:18:24.243 }, 00:18:24.243 "base_bdevs_list": [ 00:18:24.243 { 00:18:24.243 "name": "spare", 00:18:24.243 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:24.243 "is_configured": true, 00:18:24.243 "data_offset": 256, 00:18:24.243 "data_size": 7936 00:18:24.243 }, 00:18:24.243 { 00:18:24.243 "name": "BaseBdev2", 00:18:24.243 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:24.243 "is_configured": true, 00:18:24.243 "data_offset": 256, 00:18:24.243 "data_size": 7936 00:18:24.243 } 00:18:24.243 ] 00:18:24.243 }' 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.243 13:34:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.184 "name": "raid_bdev1", 00:18:25.184 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:25.184 "strip_size_kb": 0, 00:18:25.184 
"state": "online", 00:18:25.184 "raid_level": "raid1", 00:18:25.184 "superblock": true, 00:18:25.184 "num_base_bdevs": 2, 00:18:25.184 "num_base_bdevs_discovered": 2, 00:18:25.184 "num_base_bdevs_operational": 2, 00:18:25.184 "process": { 00:18:25.184 "type": "rebuild", 00:18:25.184 "target": "spare", 00:18:25.184 "progress": { 00:18:25.184 "blocks": 5888, 00:18:25.184 "percent": 74 00:18:25.184 } 00:18:25.184 }, 00:18:25.184 "base_bdevs_list": [ 00:18:25.184 { 00:18:25.184 "name": "spare", 00:18:25.184 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:25.184 "is_configured": true, 00:18:25.184 "data_offset": 256, 00:18:25.184 "data_size": 7936 00:18:25.184 }, 00:18:25.184 { 00:18:25.184 "name": "BaseBdev2", 00:18:25.184 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:25.184 "is_configured": true, 00:18:25.184 "data_offset": 256, 00:18:25.184 "data_size": 7936 00:18:25.184 } 00:18:25.184 ] 00:18:25.184 }' 00:18:25.184 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.444 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.444 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.444 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.444 13:34:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.013 [2024-11-18 13:34:55.963364] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:26.013 [2024-11-18 13:34:55.963440] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:26.013 [2024-11-18 13:34:55.963533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.273 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.273 "name": "raid_bdev1", 00:18:26.273 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:26.273 "strip_size_kb": 0, 00:18:26.273 "state": "online", 00:18:26.273 "raid_level": "raid1", 00:18:26.273 "superblock": true, 00:18:26.273 "num_base_bdevs": 2, 00:18:26.273 "num_base_bdevs_discovered": 2, 00:18:26.273 "num_base_bdevs_operational": 2, 00:18:26.273 "base_bdevs_list": [ 00:18:26.273 { 00:18:26.273 "name": "spare", 00:18:26.273 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:26.273 "is_configured": true, 00:18:26.273 "data_offset": 256, 00:18:26.273 "data_size": 7936 
00:18:26.273 }, 00:18:26.273 { 00:18:26.273 "name": "BaseBdev2", 00:18:26.273 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:26.273 "is_configured": true, 00:18:26.273 "data_offset": 256, 00:18:26.273 "data_size": 7936 00:18:26.273 } 00:18:26.273 ] 00:18:26.273 }' 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.533 
13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.533 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.533 "name": "raid_bdev1", 00:18:26.533 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:26.533 "strip_size_kb": 0, 00:18:26.533 "state": "online", 00:18:26.533 "raid_level": "raid1", 00:18:26.534 "superblock": true, 00:18:26.534 "num_base_bdevs": 2, 00:18:26.534 "num_base_bdevs_discovered": 2, 00:18:26.534 "num_base_bdevs_operational": 2, 00:18:26.534 "base_bdevs_list": [ 00:18:26.534 { 00:18:26.534 "name": "spare", 00:18:26.534 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:26.534 "is_configured": true, 00:18:26.534 "data_offset": 256, 00:18:26.534 "data_size": 7936 00:18:26.534 }, 00:18:26.534 { 00:18:26.534 "name": "BaseBdev2", 00:18:26.534 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:26.534 "is_configured": true, 00:18:26.534 "data_offset": 256, 00:18:26.534 "data_size": 7936 00:18:26.534 } 00:18:26.534 ] 00:18:26.534 }' 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.534 13:34:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.534 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.794 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.794 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.794 "name": "raid_bdev1", 00:18:26.794 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:26.794 "strip_size_kb": 0, 00:18:26.794 "state": "online", 00:18:26.794 "raid_level": "raid1", 00:18:26.794 "superblock": true, 00:18:26.794 "num_base_bdevs": 2, 00:18:26.794 "num_base_bdevs_discovered": 2, 00:18:26.794 "num_base_bdevs_operational": 2, 00:18:26.794 "base_bdevs_list": [ 00:18:26.794 { 00:18:26.794 "name": "spare", 00:18:26.794 "uuid": 
"9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:26.794 "is_configured": true, 00:18:26.794 "data_offset": 256, 00:18:26.794 "data_size": 7936 00:18:26.794 }, 00:18:26.794 { 00:18:26.794 "name": "BaseBdev2", 00:18:26.794 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:26.794 "is_configured": true, 00:18:26.794 "data_offset": 256, 00:18:26.794 "data_size": 7936 00:18:26.794 } 00:18:26.794 ] 00:18:26.794 }' 00:18:26.794 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.794 13:34:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.054 [2024-11-18 13:34:57.032709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.054 [2024-11-18 13:34:57.032779] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.054 [2024-11-18 13:34:57.032887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.054 [2024-11-18 13:34:57.032963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.054 [2024-11-18 13:34:57.032996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:27.054 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:18:27.314 /dev/nbd0 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.314 1+0 records in 00:18:27.314 1+0 records out 00:18:27.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519242 s, 7.9 MB/s 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.314 13:34:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:27.314 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:27.574 /dev/nbd1 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:27.574 1+0 records in 00:18:27.574 1+0 records out 00:18:27.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376951 s, 10.9 MB/s 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:27.574 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:27.834 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:27.834 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.834 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:27.834 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.834 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:27.834 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.835 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.095 13:34:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:28.356 
13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.356 [2024-11-18 13:34:58.229664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:28.356 [2024-11-18 13:34:58.229756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.356 [2024-11-18 13:34:58.229795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:28.356 [2024-11-18 13:34:58.229822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.356 [2024-11-18 13:34:58.231724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.356 [2024-11-18 13:34:58.231797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:28.356 [2024-11-18 13:34:58.231877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:18:28.356 [2024-11-18 13:34:58.231955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.356 [2024-11-18 13:34:58.232091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.356 spare 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.356 [2024-11-18 13:34:58.332025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:28.356 [2024-11-18 13:34:58.332089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:28.356 [2024-11-18 13:34:58.332214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:28.356 [2024-11-18 13:34:58.332364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:28.356 [2024-11-18 13:34:58.332400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:28.356 [2024-11-18 13:34:58.332563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.356 "name": "raid_bdev1", 00:18:28.356 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:28.356 "strip_size_kb": 0, 00:18:28.356 "state": "online", 00:18:28.356 "raid_level": "raid1", 00:18:28.356 "superblock": true, 00:18:28.356 "num_base_bdevs": 2, 00:18:28.356 "num_base_bdevs_discovered": 2, 00:18:28.356 "num_base_bdevs_operational": 2, 00:18:28.356 "base_bdevs_list": [ 
00:18:28.356 { 00:18:28.356 "name": "spare", 00:18:28.356 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:28.356 "is_configured": true, 00:18:28.356 "data_offset": 256, 00:18:28.356 "data_size": 7936 00:18:28.356 }, 00:18:28.356 { 00:18:28.356 "name": "BaseBdev2", 00:18:28.356 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:28.356 "is_configured": true, 00:18:28.356 "data_offset": 256, 00:18:28.356 "data_size": 7936 00:18:28.356 } 00:18:28.356 ] 00:18:28.356 }' 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.356 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.926 "name": "raid_bdev1", 00:18:28.926 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:28.926 "strip_size_kb": 0, 00:18:28.926 "state": "online", 00:18:28.926 "raid_level": "raid1", 00:18:28.926 "superblock": true, 00:18:28.926 "num_base_bdevs": 2, 00:18:28.926 "num_base_bdevs_discovered": 2, 00:18:28.926 "num_base_bdevs_operational": 2, 00:18:28.926 "base_bdevs_list": [ 00:18:28.926 { 00:18:28.926 "name": "spare", 00:18:28.926 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:28.926 "is_configured": true, 00:18:28.926 "data_offset": 256, 00:18:28.926 "data_size": 7936 00:18:28.926 }, 00:18:28.926 { 00:18:28.926 "name": "BaseBdev2", 00:18:28.926 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:28.926 "is_configured": true, 00:18:28.926 "data_offset": 256, 00:18:28.926 "data_size": 7936 00:18:28.926 } 00:18:28.926 ] 00:18:28.926 }' 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.926 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.927 [2024-11-18 13:34:58.940453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.927 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.186 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.186 "name": "raid_bdev1", 00:18:29.186 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:29.186 "strip_size_kb": 0, 00:18:29.186 "state": "online", 00:18:29.186 "raid_level": "raid1", 00:18:29.186 "superblock": true, 00:18:29.186 "num_base_bdevs": 2, 00:18:29.186 "num_base_bdevs_discovered": 1, 00:18:29.186 "num_base_bdevs_operational": 1, 00:18:29.186 "base_bdevs_list": [ 00:18:29.186 { 00:18:29.186 "name": null, 00:18:29.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.186 "is_configured": false, 00:18:29.186 "data_offset": 0, 00:18:29.186 "data_size": 7936 00:18:29.186 }, 00:18:29.186 { 00:18:29.186 "name": "BaseBdev2", 00:18:29.186 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:29.186 "is_configured": true, 00:18:29.186 "data_offset": 256, 00:18:29.186 "data_size": 7936 00:18:29.186 } 00:18:29.186 ] 00:18:29.186 }' 00:18:29.186 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.186 13:34:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.445 13:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:29.446 13:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:29.446 13:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.446 [2024-11-18 13:34:59.375753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.446 [2024-11-18 13:34:59.375918] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:29.446 [2024-11-18 13:34:59.375975] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:29.446 [2024-11-18 13:34:59.376028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.446 [2024-11-18 13:34:59.388894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:29.446 13:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.446 13:34:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:29.446 [2024-11-18 13:34:59.390635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.385 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.644 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.644 "name": "raid_bdev1", 00:18:30.644 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:30.644 "strip_size_kb": 0, 00:18:30.644 "state": "online", 00:18:30.644 "raid_level": "raid1", 00:18:30.644 "superblock": true, 00:18:30.644 "num_base_bdevs": 2, 00:18:30.644 "num_base_bdevs_discovered": 2, 00:18:30.644 "num_base_bdevs_operational": 2, 00:18:30.644 "process": { 00:18:30.644 "type": "rebuild", 00:18:30.644 "target": "spare", 00:18:30.644 "progress": { 00:18:30.644 "blocks": 2560, 00:18:30.644 "percent": 32 00:18:30.644 } 00:18:30.644 }, 00:18:30.644 "base_bdevs_list": [ 00:18:30.644 { 00:18:30.644 "name": "spare", 00:18:30.644 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:30.644 "is_configured": true, 00:18:30.644 "data_offset": 256, 00:18:30.644 "data_size": 7936 00:18:30.644 }, 00:18:30.644 { 00:18:30.644 "name": "BaseBdev2", 00:18:30.644 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:30.644 "is_configured": true, 00:18:30.644 "data_offset": 256, 00:18:30.644 "data_size": 7936 00:18:30.644 } 00:18:30.644 ] 00:18:30.644 }' 00:18:30.644 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.644 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.644 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.644 13:35:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.644 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:30.644 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.644 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.644 [2024-11-18 13:35:00.531076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.644 [2024-11-18 13:35:00.595388] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.644 [2024-11-18 13:35:00.595442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.644 [2024-11-18 13:35:00.595456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.644 [2024-11-18 13:35:00.595474] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:30.645 13:35:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.645 "name": "raid_bdev1", 00:18:30.645 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:30.645 "strip_size_kb": 0, 00:18:30.645 "state": "online", 00:18:30.645 "raid_level": "raid1", 00:18:30.645 "superblock": true, 00:18:30.645 "num_base_bdevs": 2, 00:18:30.645 "num_base_bdevs_discovered": 1, 00:18:30.645 "num_base_bdevs_operational": 1, 00:18:30.645 "base_bdevs_list": [ 00:18:30.645 { 00:18:30.645 "name": null, 00:18:30.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.645 "is_configured": false, 00:18:30.645 "data_offset": 0, 00:18:30.645 "data_size": 7936 00:18:30.645 }, 00:18:30.645 { 00:18:30.645 "name": "BaseBdev2", 00:18:30.645 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:30.645 "is_configured": true, 00:18:30.645 "data_offset": 256, 00:18:30.645 "data_size": 7936 00:18:30.645 } 
00:18:30.645 ] 00:18:30.645 }' 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.645 13:35:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.214 13:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.214 13:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.214 13:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.214 [2024-11-18 13:35:01.089416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.214 [2024-11-18 13:35:01.089514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.214 [2024-11-18 13:35:01.089553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:31.214 [2024-11-18 13:35:01.089582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.214 [2024-11-18 13:35:01.089817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.214 [2024-11-18 13:35:01.089870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.214 [2024-11-18 13:35:01.089945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:31.214 [2024-11-18 13:35:01.089981] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:31.214 [2024-11-18 13:35:01.090041] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:31.214 [2024-11-18 13:35:01.090080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.214 spare 00:18:31.214 [2024-11-18 13:35:01.103387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:31.214 13:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.214 13:35:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:31.214 [2024-11-18 13:35:01.105102] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.154 "name": 
"raid_bdev1", 00:18:32.154 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:32.154 "strip_size_kb": 0, 00:18:32.154 "state": "online", 00:18:32.154 "raid_level": "raid1", 00:18:32.154 "superblock": true, 00:18:32.154 "num_base_bdevs": 2, 00:18:32.154 "num_base_bdevs_discovered": 2, 00:18:32.154 "num_base_bdevs_operational": 2, 00:18:32.154 "process": { 00:18:32.154 "type": "rebuild", 00:18:32.154 "target": "spare", 00:18:32.154 "progress": { 00:18:32.154 "blocks": 2560, 00:18:32.154 "percent": 32 00:18:32.154 } 00:18:32.154 }, 00:18:32.154 "base_bdevs_list": [ 00:18:32.154 { 00:18:32.154 "name": "spare", 00:18:32.154 "uuid": "9d39783b-8598-5b7b-a8ee-946c1ed21d1b", 00:18:32.154 "is_configured": true, 00:18:32.154 "data_offset": 256, 00:18:32.154 "data_size": 7936 00:18:32.154 }, 00:18:32.154 { 00:18:32.154 "name": "BaseBdev2", 00:18:32.154 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:32.154 "is_configured": true, 00:18:32.154 "data_offset": 256, 00:18:32.154 "data_size": 7936 00:18:32.154 } 00:18:32.154 ] 00:18:32.154 }' 00:18:32.154 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.414 [2024-11-18 13:35:02.269617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:32.414 [2024-11-18 13:35:02.309499] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:32.414 [2024-11-18 13:35:02.309594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.414 [2024-11-18 13:35:02.309628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.414 [2024-11-18 13:35:02.309647] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.414 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.414 "name": "raid_bdev1", 00:18:32.414 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:32.414 "strip_size_kb": 0, 00:18:32.414 "state": "online", 00:18:32.414 "raid_level": "raid1", 00:18:32.414 "superblock": true, 00:18:32.414 "num_base_bdevs": 2, 00:18:32.414 "num_base_bdevs_discovered": 1, 00:18:32.414 "num_base_bdevs_operational": 1, 00:18:32.414 "base_bdevs_list": [ 00:18:32.414 { 00:18:32.414 "name": null, 00:18:32.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.415 "is_configured": false, 00:18:32.415 "data_offset": 0, 00:18:32.415 "data_size": 7936 00:18:32.415 }, 00:18:32.415 { 00:18:32.415 "name": "BaseBdev2", 00:18:32.415 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:32.415 "is_configured": true, 00:18:32.415 "data_offset": 256, 00:18:32.415 "data_size": 7936 00:18:32.415 } 00:18:32.415 ] 00:18:32.415 }' 00:18:32.415 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.415 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.984 13:35:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.984 "name": "raid_bdev1", 00:18:32.984 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:32.984 "strip_size_kb": 0, 00:18:32.984 "state": "online", 00:18:32.984 "raid_level": "raid1", 00:18:32.984 "superblock": true, 00:18:32.984 "num_base_bdevs": 2, 00:18:32.984 "num_base_bdevs_discovered": 1, 00:18:32.984 "num_base_bdevs_operational": 1, 00:18:32.984 "base_bdevs_list": [ 00:18:32.984 { 00:18:32.984 "name": null, 00:18:32.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.984 "is_configured": false, 00:18:32.984 "data_offset": 0, 00:18:32.984 "data_size": 7936 00:18:32.984 }, 00:18:32.984 { 00:18:32.984 "name": "BaseBdev2", 00:18:32.984 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:32.984 "is_configured": true, 00:18:32.984 "data_offset": 256, 00:18:32.984 "data_size": 7936 00:18:32.984 } 00:18:32.984 ] 00:18:32.984 }' 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.984 [2024-11-18 13:35:02.903597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:32.984 [2024-11-18 13:35:02.903646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.984 [2024-11-18 13:35:02.903667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:32.984 [2024-11-18 13:35:02.903676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.984 [2024-11-18 13:35:02.903875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.984 [2024-11-18 13:35:02.903887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:32.984 [2024-11-18 13:35:02.903931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:32.984 [2024-11-18 13:35:02.903942] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.984 [2024-11-18 13:35:02.903950] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:32.984 [2024-11-18 13:35:02.903959] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:32.984 BaseBdev1 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.984 13:35:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:33.924 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.924 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.924 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.924 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.925 "name": "raid_bdev1", 00:18:33.925 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:33.925 "strip_size_kb": 0, 00:18:33.925 "state": "online", 00:18:33.925 "raid_level": "raid1", 00:18:33.925 "superblock": true, 00:18:33.925 "num_base_bdevs": 2, 00:18:33.925 "num_base_bdevs_discovered": 1, 00:18:33.925 "num_base_bdevs_operational": 1, 00:18:33.925 "base_bdevs_list": [ 00:18:33.925 { 00:18:33.925 "name": null, 00:18:33.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.925 "is_configured": false, 00:18:33.925 "data_offset": 0, 00:18:33.925 "data_size": 7936 00:18:33.925 }, 00:18:33.925 { 00:18:33.925 "name": "BaseBdev2", 00:18:33.925 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:33.925 "is_configured": true, 00:18:33.925 "data_offset": 256, 00:18:33.925 "data_size": 7936 00:18:33.925 } 00:18:33.925 ] 00:18:33.925 }' 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.925 13:35:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.495 "name": "raid_bdev1", 00:18:34.495 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:34.495 "strip_size_kb": 0, 00:18:34.495 "state": "online", 00:18:34.495 "raid_level": "raid1", 00:18:34.495 "superblock": true, 00:18:34.495 "num_base_bdevs": 2, 00:18:34.495 "num_base_bdevs_discovered": 1, 00:18:34.495 "num_base_bdevs_operational": 1, 00:18:34.495 "base_bdevs_list": [ 00:18:34.495 { 00:18:34.495 "name": null, 00:18:34.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.495 "is_configured": false, 00:18:34.495 "data_offset": 0, 00:18:34.495 "data_size": 7936 00:18:34.495 }, 00:18:34.495 { 00:18:34.495 "name": "BaseBdev2", 00:18:34.495 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:34.495 "is_configured": 
true, 00:18:34.495 "data_offset": 256, 00:18:34.495 "data_size": 7936 00:18:34.495 } 00:18:34.495 ] 00:18:34.495 }' 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.495 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.496 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:34.496 [2024-11-18 13:35:04.544919] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.496 [2024-11-18 13:35:04.545083] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.496 [2024-11-18 13:35:04.545156] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:34.756 request: 00:18:34.756 { 00:18:34.756 "base_bdev": "BaseBdev1", 00:18:34.756 "raid_bdev": "raid_bdev1", 00:18:34.756 "method": "bdev_raid_add_base_bdev", 00:18:34.756 "req_id": 1 00:18:34.756 } 00:18:34.756 Got JSON-RPC error response 00:18:34.756 response: 00:18:34.756 { 00:18:34.756 "code": -22, 00:18:34.756 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:34.756 } 00:18:34.756 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:34.756 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:34.756 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:34.756 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:34.756 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:34.756 13:35:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:35.699 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.700 "name": "raid_bdev1", 00:18:35.700 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:35.700 "strip_size_kb": 0, 00:18:35.700 "state": "online", 00:18:35.700 "raid_level": "raid1", 00:18:35.700 "superblock": true, 00:18:35.700 "num_base_bdevs": 2, 00:18:35.700 "num_base_bdevs_discovered": 1, 00:18:35.700 "num_base_bdevs_operational": 1, 00:18:35.700 "base_bdevs_list": [ 00:18:35.700 { 00:18:35.700 "name": null, 00:18:35.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.700 "is_configured": false, 00:18:35.700 
"data_offset": 0, 00:18:35.700 "data_size": 7936 00:18:35.700 }, 00:18:35.700 { 00:18:35.700 "name": "BaseBdev2", 00:18:35.700 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:35.700 "is_configured": true, 00:18:35.700 "data_offset": 256, 00:18:35.700 "data_size": 7936 00:18:35.700 } 00:18:35.700 ] 00:18:35.700 }' 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.700 13:35:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.969 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.241 "name": "raid_bdev1", 00:18:36.241 "uuid": "fa8a2b7a-b848-4866-921a-9223f854b57c", 00:18:36.241 
"strip_size_kb": 0, 00:18:36.241 "state": "online", 00:18:36.241 "raid_level": "raid1", 00:18:36.241 "superblock": true, 00:18:36.241 "num_base_bdevs": 2, 00:18:36.241 "num_base_bdevs_discovered": 1, 00:18:36.241 "num_base_bdevs_operational": 1, 00:18:36.241 "base_bdevs_list": [ 00:18:36.241 { 00:18:36.241 "name": null, 00:18:36.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.241 "is_configured": false, 00:18:36.241 "data_offset": 0, 00:18:36.241 "data_size": 7936 00:18:36.241 }, 00:18:36.241 { 00:18:36.241 "name": "BaseBdev2", 00:18:36.241 "uuid": "e19eff5e-5d9b-5bc2-888c-88a575cba765", 00:18:36.241 "is_configured": true, 00:18:36.241 "data_offset": 256, 00:18:36.241 "data_size": 7936 00:18:36.241 } 00:18:36.241 ] 00:18:36.241 }' 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87714 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87714 ']' 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87714 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87714 00:18:36.241 13:35:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87714' 00:18:36.241 killing process with pid 87714 00:18:36.241 Received shutdown signal, test time was about 60.000000 seconds 00:18:36.241 00:18:36.241 Latency(us) 00:18:36.241 [2024-11-18T13:35:06.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.241 [2024-11-18T13:35:06.295Z] =================================================================================================================== 00:18:36.241 [2024-11-18T13:35:06.295Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87714 00:18:36.241 [2024-11-18 13:35:06.180717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.241 [2024-11-18 13:35:06.180818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.241 [2024-11-18 13:35:06.180858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.241 [2024-11-18 13:35:06.180869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:36.241 13:35:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87714 00:18:36.500 [2024-11-18 13:35:06.480453] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.887 13:35:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:37.887 00:18:37.887 real 0m19.659s 00:18:37.887 user 0m25.664s 00:18:37.887 sys 0m2.633s 00:18:37.887 13:35:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.887 13:35:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:37.887 ************************************ 00:18:37.887 END TEST raid_rebuild_test_sb_md_separate 00:18:37.887 ************************************ 00:18:37.887 13:35:07 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:37.887 13:35:07 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:37.887 13:35:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:37.887 13:35:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.887 13:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.887 ************************************ 00:18:37.887 START TEST raid_state_function_test_sb_md_interleaved 00:18:37.887 ************************************ 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:37.887 13:35:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88401 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:37.887 Process raid pid: 88401 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88401' 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88401 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88401 ']' 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.887 13:35:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.887 [2024-11-18 13:35:07.686817] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:37.888 [2024-11-18 13:35:07.687002] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.888 [2024-11-18 13:35:07.862805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.146 [2024-11-18 13:35:07.970979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.146 [2024-11-18 13:35:08.179272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.146 [2024-11-18 13:35:08.179382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.715 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.715 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:38.715 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:38.715 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.715 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.715 [2024-11-18 13:35:08.516537] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:38.715 [2024-11-18 13:35:08.516625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:38.716 [2024-11-18 13:35:08.516652] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.716 [2024-11-18 13:35:08.516674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.716 13:35:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.716 13:35:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.716 "name": "Existed_Raid", 00:18:38.716 "uuid": "fdd1db35-31be-47f1-bc03-af512082d786", 00:18:38.716 "strip_size_kb": 0, 00:18:38.716 "state": "configuring", 00:18:38.716 "raid_level": "raid1", 00:18:38.716 "superblock": true, 00:18:38.716 "num_base_bdevs": 2, 00:18:38.716 "num_base_bdevs_discovered": 0, 00:18:38.716 "num_base_bdevs_operational": 2, 00:18:38.716 "base_bdevs_list": [ 00:18:38.716 { 00:18:38.716 "name": "BaseBdev1", 00:18:38.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.716 "is_configured": false, 00:18:38.716 "data_offset": 0, 00:18:38.716 "data_size": 0 00:18:38.716 }, 00:18:38.716 { 00:18:38.716 "name": "BaseBdev2", 00:18:38.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.716 "is_configured": false, 00:18:38.716 "data_offset": 0, 00:18:38.716 "data_size": 0 00:18:38.716 } 00:18:38.716 ] 00:18:38.716 }' 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.716 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.976 [2024-11-18 13:35:08.951763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:38.976 [2024-11-18 13:35:08.951831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.976 [2024-11-18 13:35:08.963750] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:38.976 [2024-11-18 13:35:08.963821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:38.976 [2024-11-18 13:35:08.963845] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.976 [2024-11-18 13:35:08.963869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.976 13:35:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.976 [2024-11-18 13:35:09.009983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.976 BaseBdev1 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.976 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.236 [ 00:18:39.236 { 00:18:39.236 "name": "BaseBdev1", 00:18:39.236 "aliases": [ 00:18:39.236 "9e241324-dea3-4d92-947a-c1506b239030" 00:18:39.236 ], 00:18:39.236 "product_name": "Malloc disk", 00:18:39.236 "block_size": 4128, 00:18:39.236 "num_blocks": 8192, 00:18:39.236 "uuid": "9e241324-dea3-4d92-947a-c1506b239030", 00:18:39.236 "md_size": 32, 00:18:39.236 
"md_interleave": true, 00:18:39.236 "dif_type": 0, 00:18:39.237 "assigned_rate_limits": { 00:18:39.237 "rw_ios_per_sec": 0, 00:18:39.237 "rw_mbytes_per_sec": 0, 00:18:39.237 "r_mbytes_per_sec": 0, 00:18:39.237 "w_mbytes_per_sec": 0 00:18:39.237 }, 00:18:39.237 "claimed": true, 00:18:39.237 "claim_type": "exclusive_write", 00:18:39.237 "zoned": false, 00:18:39.237 "supported_io_types": { 00:18:39.237 "read": true, 00:18:39.237 "write": true, 00:18:39.237 "unmap": true, 00:18:39.237 "flush": true, 00:18:39.237 "reset": true, 00:18:39.237 "nvme_admin": false, 00:18:39.237 "nvme_io": false, 00:18:39.237 "nvme_io_md": false, 00:18:39.237 "write_zeroes": true, 00:18:39.237 "zcopy": true, 00:18:39.237 "get_zone_info": false, 00:18:39.237 "zone_management": false, 00:18:39.237 "zone_append": false, 00:18:39.237 "compare": false, 00:18:39.237 "compare_and_write": false, 00:18:39.237 "abort": true, 00:18:39.237 "seek_hole": false, 00:18:39.237 "seek_data": false, 00:18:39.237 "copy": true, 00:18:39.237 "nvme_iov_md": false 00:18:39.237 }, 00:18:39.237 "memory_domains": [ 00:18:39.237 { 00:18:39.237 "dma_device_id": "system", 00:18:39.237 "dma_device_type": 1 00:18:39.237 }, 00:18:39.237 { 00:18:39.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.237 "dma_device_type": 2 00:18:39.237 } 00:18:39.237 ], 00:18:39.237 "driver_specific": {} 00:18:39.237 } 00:18:39.237 ] 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.237 13:35:09 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.237 "name": "Existed_Raid", 00:18:39.237 "uuid": "d0d9919e-c7a2-489e-a2ea-7da7b0fd910f", 00:18:39.237 "strip_size_kb": 0, 00:18:39.237 "state": "configuring", 00:18:39.237 "raid_level": "raid1", 
00:18:39.237 "superblock": true, 00:18:39.237 "num_base_bdevs": 2, 00:18:39.237 "num_base_bdevs_discovered": 1, 00:18:39.237 "num_base_bdevs_operational": 2, 00:18:39.237 "base_bdevs_list": [ 00:18:39.237 { 00:18:39.237 "name": "BaseBdev1", 00:18:39.237 "uuid": "9e241324-dea3-4d92-947a-c1506b239030", 00:18:39.237 "is_configured": true, 00:18:39.237 "data_offset": 256, 00:18:39.237 "data_size": 7936 00:18:39.237 }, 00:18:39.237 { 00:18:39.237 "name": "BaseBdev2", 00:18:39.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.237 "is_configured": false, 00:18:39.237 "data_offset": 0, 00:18:39.237 "data_size": 0 00:18:39.237 } 00:18:39.237 ] 00:18:39.237 }' 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.237 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.497 [2024-11-18 13:35:09.521125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:39.497 [2024-11-18 13:35:09.521204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.497 [2024-11-18 13:35:09.533172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.497 [2024-11-18 13:35:09.534832] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.497 [2024-11-18 13:35:09.534912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.497 
13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.497 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.756 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.756 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.756 "name": "Existed_Raid", 00:18:39.756 "uuid": "cbf27830-5661-4a19-ab23-c16418065d5f", 00:18:39.756 "strip_size_kb": 0, 00:18:39.756 "state": "configuring", 00:18:39.756 "raid_level": "raid1", 00:18:39.756 "superblock": true, 00:18:39.757 "num_base_bdevs": 2, 00:18:39.757 "num_base_bdevs_discovered": 1, 00:18:39.757 "num_base_bdevs_operational": 2, 00:18:39.757 "base_bdevs_list": [ 00:18:39.757 { 00:18:39.757 "name": "BaseBdev1", 00:18:39.757 "uuid": "9e241324-dea3-4d92-947a-c1506b239030", 00:18:39.757 "is_configured": true, 00:18:39.757 "data_offset": 256, 00:18:39.757 "data_size": 7936 00:18:39.757 }, 00:18:39.757 { 00:18:39.757 "name": "BaseBdev2", 00:18:39.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.757 "is_configured": false, 00:18:39.757 "data_offset": 0, 00:18:39.757 "data_size": 0 00:18:39.757 } 00:18:39.757 ] 00:18:39.757 }' 00:18:39.757 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:39.757 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.017 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:40.017 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.017 13:35:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.017 BaseBdev2 00:18:40.017 [2024-11-18 13:35:10.022479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:40.017 [2024-11-18 13:35:10.022665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:40.017 [2024-11-18 13:35:10.022678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:40.017 [2024-11-18 13:35:10.022761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:40.017 [2024-11-18 13:35:10.022828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:40.017 [2024-11-18 13:35:10.022838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:40.017 [2024-11-18 13:35:10.022893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.017 [ 00:18:40.017 { 00:18:40.017 "name": "BaseBdev2", 00:18:40.017 "aliases": [ 00:18:40.017 "b11c81b3-ceae-4e1c-81ff-b2e511839274" 00:18:40.017 ], 00:18:40.017 "product_name": "Malloc disk", 00:18:40.017 "block_size": 4128, 00:18:40.017 "num_blocks": 8192, 00:18:40.017 "uuid": "b11c81b3-ceae-4e1c-81ff-b2e511839274", 00:18:40.017 "md_size": 32, 00:18:40.017 "md_interleave": true, 00:18:40.017 "dif_type": 0, 00:18:40.017 "assigned_rate_limits": { 00:18:40.017 "rw_ios_per_sec": 0, 00:18:40.017 "rw_mbytes_per_sec": 0, 00:18:40.017 "r_mbytes_per_sec": 0, 00:18:40.017 "w_mbytes_per_sec": 0 00:18:40.017 }, 00:18:40.017 "claimed": true, 00:18:40.017 "claim_type": "exclusive_write", 
00:18:40.017 "zoned": false, 00:18:40.017 "supported_io_types": { 00:18:40.017 "read": true, 00:18:40.017 "write": true, 00:18:40.017 "unmap": true, 00:18:40.017 "flush": true, 00:18:40.017 "reset": true, 00:18:40.017 "nvme_admin": false, 00:18:40.017 "nvme_io": false, 00:18:40.017 "nvme_io_md": false, 00:18:40.017 "write_zeroes": true, 00:18:40.017 "zcopy": true, 00:18:40.017 "get_zone_info": false, 00:18:40.017 "zone_management": false, 00:18:40.017 "zone_append": false, 00:18:40.017 "compare": false, 00:18:40.017 "compare_and_write": false, 00:18:40.017 "abort": true, 00:18:40.017 "seek_hole": false, 00:18:40.017 "seek_data": false, 00:18:40.017 "copy": true, 00:18:40.017 "nvme_iov_md": false 00:18:40.017 }, 00:18:40.017 "memory_domains": [ 00:18:40.017 { 00:18:40.017 "dma_device_id": "system", 00:18:40.017 "dma_device_type": 1 00:18:40.017 }, 00:18:40.017 { 00:18:40.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.017 "dma_device_type": 2 00:18:40.017 } 00:18:40.017 ], 00:18:40.017 "driver_specific": {} 00:18:40.017 } 00:18:40.017 ] 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.017 
13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.017 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.277 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.277 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.277 "name": "Existed_Raid", 00:18:40.277 "uuid": "cbf27830-5661-4a19-ab23-c16418065d5f", 00:18:40.277 "strip_size_kb": 0, 00:18:40.277 "state": "online", 00:18:40.277 "raid_level": "raid1", 00:18:40.277 "superblock": true, 00:18:40.277 "num_base_bdevs": 2, 00:18:40.277 "num_base_bdevs_discovered": 2, 00:18:40.277 
"num_base_bdevs_operational": 2, 00:18:40.277 "base_bdevs_list": [ 00:18:40.277 { 00:18:40.277 "name": "BaseBdev1", 00:18:40.278 "uuid": "9e241324-dea3-4d92-947a-c1506b239030", 00:18:40.278 "is_configured": true, 00:18:40.278 "data_offset": 256, 00:18:40.278 "data_size": 7936 00:18:40.278 }, 00:18:40.278 { 00:18:40.278 "name": "BaseBdev2", 00:18:40.278 "uuid": "b11c81b3-ceae-4e1c-81ff-b2e511839274", 00:18:40.278 "is_configured": true, 00:18:40.278 "data_offset": 256, 00:18:40.278 "data_size": 7936 00:18:40.278 } 00:18:40.278 ] 00:18:40.278 }' 00:18:40.278 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.278 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.538 13:35:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:40.538 [2024-11-18 13:35:10.533860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.538 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.538 "name": "Existed_Raid", 00:18:40.538 "aliases": [ 00:18:40.538 "cbf27830-5661-4a19-ab23-c16418065d5f" 00:18:40.538 ], 00:18:40.538 "product_name": "Raid Volume", 00:18:40.538 "block_size": 4128, 00:18:40.538 "num_blocks": 7936, 00:18:40.538 "uuid": "cbf27830-5661-4a19-ab23-c16418065d5f", 00:18:40.538 "md_size": 32, 00:18:40.538 "md_interleave": true, 00:18:40.538 "dif_type": 0, 00:18:40.538 "assigned_rate_limits": { 00:18:40.538 "rw_ios_per_sec": 0, 00:18:40.538 "rw_mbytes_per_sec": 0, 00:18:40.538 "r_mbytes_per_sec": 0, 00:18:40.538 "w_mbytes_per_sec": 0 00:18:40.538 }, 00:18:40.538 "claimed": false, 00:18:40.538 "zoned": false, 00:18:40.538 "supported_io_types": { 00:18:40.538 "read": true, 00:18:40.538 "write": true, 00:18:40.538 "unmap": false, 00:18:40.538 "flush": false, 00:18:40.538 "reset": true, 00:18:40.538 "nvme_admin": false, 00:18:40.538 "nvme_io": false, 00:18:40.538 "nvme_io_md": false, 00:18:40.538 "write_zeroes": true, 00:18:40.538 "zcopy": false, 00:18:40.538 "get_zone_info": false, 00:18:40.538 "zone_management": false, 00:18:40.538 "zone_append": false, 00:18:40.538 "compare": false, 00:18:40.538 "compare_and_write": false, 00:18:40.538 "abort": false, 00:18:40.538 "seek_hole": false, 00:18:40.538 "seek_data": false, 00:18:40.538 "copy": false, 00:18:40.538 "nvme_iov_md": false 00:18:40.538 }, 00:18:40.538 "memory_domains": [ 00:18:40.538 { 00:18:40.538 "dma_device_id": "system", 00:18:40.538 "dma_device_type": 1 00:18:40.538 }, 00:18:40.538 { 00:18:40.538 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:40.538 "dma_device_type": 2 00:18:40.538 }, 00:18:40.538 { 00:18:40.538 "dma_device_id": "system", 00:18:40.538 "dma_device_type": 1 00:18:40.538 }, 00:18:40.538 { 00:18:40.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.538 "dma_device_type": 2 00:18:40.538 } 00:18:40.538 ], 00:18:40.538 "driver_specific": { 00:18:40.538 "raid": { 00:18:40.538 "uuid": "cbf27830-5661-4a19-ab23-c16418065d5f", 00:18:40.538 "strip_size_kb": 0, 00:18:40.538 "state": "online", 00:18:40.538 "raid_level": "raid1", 00:18:40.538 "superblock": true, 00:18:40.538 "num_base_bdevs": 2, 00:18:40.539 "num_base_bdevs_discovered": 2, 00:18:40.539 "num_base_bdevs_operational": 2, 00:18:40.539 "base_bdevs_list": [ 00:18:40.539 { 00:18:40.539 "name": "BaseBdev1", 00:18:40.539 "uuid": "9e241324-dea3-4d92-947a-c1506b239030", 00:18:40.539 "is_configured": true, 00:18:40.539 "data_offset": 256, 00:18:40.539 "data_size": 7936 00:18:40.539 }, 00:18:40.539 { 00:18:40.539 "name": "BaseBdev2", 00:18:40.539 "uuid": "b11c81b3-ceae-4e1c-81ff-b2e511839274", 00:18:40.539 "is_configured": true, 00:18:40.539 "data_offset": 256, 00:18:40.539 "data_size": 7936 00:18:40.539 } 00:18:40.539 ] 00:18:40.539 } 00:18:40.539 } 00:18:40.539 }' 00:18:40.539 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:40.799 BaseBdev2' 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.799 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:40.800 
13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.800 [2024-11-18 13:35:10.753271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.800 13:35:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.800 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.060 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.060 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.060 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.060 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.060 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.060 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.060 "name": "Existed_Raid", 00:18:41.060 "uuid": "cbf27830-5661-4a19-ab23-c16418065d5f", 00:18:41.060 "strip_size_kb": 0, 00:18:41.060 "state": "online", 00:18:41.060 "raid_level": "raid1", 00:18:41.060 "superblock": true, 00:18:41.060 "num_base_bdevs": 2, 00:18:41.060 "num_base_bdevs_discovered": 1, 00:18:41.060 "num_base_bdevs_operational": 1, 00:18:41.060 "base_bdevs_list": [ 00:18:41.060 { 00:18:41.060 "name": null, 00:18:41.060 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:41.060 "is_configured": false, 00:18:41.060 "data_offset": 0, 00:18:41.060 "data_size": 7936 00:18:41.060 }, 00:18:41.060 { 00:18:41.060 "name": "BaseBdev2", 00:18:41.060 "uuid": "b11c81b3-ceae-4e1c-81ff-b2e511839274", 00:18:41.060 "is_configured": true, 00:18:41.060 "data_offset": 256, 00:18:41.060 "data_size": 7936 00:18:41.060 } 00:18:41.060 ] 00:18:41.060 }' 00:18:41.060 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.060 13:35:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:41.320 13:35:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.320 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.321 [2024-11-18 13:35:11.290005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:41.321 [2024-11-18 13:35:11.290169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.581 [2024-11-18 13:35:11.380465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.581 [2024-11-18 13:35:11.380583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.581 [2024-11-18 13:35:11.380623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88401 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88401 ']' 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88401 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88401 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.581 killing process with pid 88401 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88401' 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88401 00:18:41.581 [2024-11-18 13:35:11.479823] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.581 13:35:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88401 00:18:41.581 [2024-11-18 13:35:11.495671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.523 
13:35:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:42.523 00:18:42.523 real 0m4.947s 00:18:42.523 user 0m7.153s 00:18:42.523 sys 0m0.863s 00:18:42.523 13:35:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.523 13:35:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.523 ************************************ 00:18:42.523 END TEST raid_state_function_test_sb_md_interleaved 00:18:42.523 ************************************ 00:18:42.784 13:35:12 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:42.784 13:35:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:42.784 13:35:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.784 13:35:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.784 ************************************ 00:18:42.784 START TEST raid_superblock_test_md_interleaved 00:18:42.784 ************************************ 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88648 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88648 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88648 ']' 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.784 13:35:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.784 [2024-11-18 13:35:12.711609] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:42.784 [2024-11-18 13:35:12.711809] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88648 ] 00:18:43.044 [2024-11-18 13:35:12.892307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.044 [2024-11-18 13:35:13.000423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.304 [2024-11-18 13:35:13.183634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.304 [2024-11-18 13:35:13.183763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 malloc1 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 [2024-11-18 13:35:13.560090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.563 [2024-11-18 13:35:13.560264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.563 [2024-11-18 13:35:13.560303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:43.563 [2024-11-18 13:35:13.560342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.563 
[2024-11-18 13:35:13.562033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.563 [2024-11-18 13:35:13.562104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.563 pt1 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.563 malloc2 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.563 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.823 [2024-11-18 13:35:13.617370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.823 [2024-11-18 13:35:13.617426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.823 [2024-11-18 13:35:13.617446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:43.823 [2024-11-18 13:35:13.617455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.823 [2024-11-18 13:35:13.619286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.823 [2024-11-18 13:35:13.619321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.823 pt2 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.823 [2024-11-18 13:35:13.629386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.823 [2024-11-18 13:35:13.631096] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.823 [2024-11-18 13:35:13.631297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:43.823 [2024-11-18 13:35:13.631311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:43.823 [2024-11-18 13:35:13.631382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:43.823 [2024-11-18 13:35:13.631448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:43.823 [2024-11-18 13:35:13.631470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:43.823 [2024-11-18 13:35:13.631547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.823 
13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.823 "name": "raid_bdev1", 00:18:43.823 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:43.823 "strip_size_kb": 0, 00:18:43.823 "state": "online", 00:18:43.823 "raid_level": "raid1", 00:18:43.823 "superblock": true, 00:18:43.823 "num_base_bdevs": 2, 00:18:43.823 "num_base_bdevs_discovered": 2, 00:18:43.823 "num_base_bdevs_operational": 2, 00:18:43.823 "base_bdevs_list": [ 00:18:43.823 { 00:18:43.823 "name": "pt1", 00:18:43.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:43.823 "is_configured": true, 00:18:43.823 "data_offset": 256, 00:18:43.823 "data_size": 7936 00:18:43.823 }, 00:18:43.823 { 00:18:43.823 "name": "pt2", 00:18:43.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:43.823 "is_configured": true, 00:18:43.823 "data_offset": 256, 00:18:43.823 "data_size": 7936 00:18:43.823 } 00:18:43.823 ] 00:18:43.823 }' 00:18:43.823 13:35:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.823 13:35:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.084 [2024-11-18 13:35:14.076833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.084 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:44.084 "name": "raid_bdev1", 00:18:44.084 "aliases": [ 00:18:44.084 "08f03f67-f7b9-4c9e-889f-1e98cc217c1e" 00:18:44.084 ], 00:18:44.084 "product_name": "Raid Volume", 00:18:44.084 "block_size": 4128, 00:18:44.084 "num_blocks": 7936, 00:18:44.084 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:44.084 "md_size": 32, 
00:18:44.084 "md_interleave": true, 00:18:44.084 "dif_type": 0, 00:18:44.084 "assigned_rate_limits": { 00:18:44.084 "rw_ios_per_sec": 0, 00:18:44.084 "rw_mbytes_per_sec": 0, 00:18:44.084 "r_mbytes_per_sec": 0, 00:18:44.084 "w_mbytes_per_sec": 0 00:18:44.084 }, 00:18:44.084 "claimed": false, 00:18:44.084 "zoned": false, 00:18:44.084 "supported_io_types": { 00:18:44.084 "read": true, 00:18:44.084 "write": true, 00:18:44.084 "unmap": false, 00:18:44.084 "flush": false, 00:18:44.084 "reset": true, 00:18:44.084 "nvme_admin": false, 00:18:44.084 "nvme_io": false, 00:18:44.084 "nvme_io_md": false, 00:18:44.084 "write_zeroes": true, 00:18:44.084 "zcopy": false, 00:18:44.084 "get_zone_info": false, 00:18:44.084 "zone_management": false, 00:18:44.084 "zone_append": false, 00:18:44.084 "compare": false, 00:18:44.084 "compare_and_write": false, 00:18:44.084 "abort": false, 00:18:44.084 "seek_hole": false, 00:18:44.084 "seek_data": false, 00:18:44.084 "copy": false, 00:18:44.084 "nvme_iov_md": false 00:18:44.084 }, 00:18:44.084 "memory_domains": [ 00:18:44.084 { 00:18:44.084 "dma_device_id": "system", 00:18:44.084 "dma_device_type": 1 00:18:44.084 }, 00:18:44.084 { 00:18:44.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.084 "dma_device_type": 2 00:18:44.084 }, 00:18:44.084 { 00:18:44.084 "dma_device_id": "system", 00:18:44.084 "dma_device_type": 1 00:18:44.084 }, 00:18:44.084 { 00:18:44.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.084 "dma_device_type": 2 00:18:44.084 } 00:18:44.084 ], 00:18:44.084 "driver_specific": { 00:18:44.084 "raid": { 00:18:44.084 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:44.084 "strip_size_kb": 0, 00:18:44.084 "state": "online", 00:18:44.084 "raid_level": "raid1", 00:18:44.084 "superblock": true, 00:18:44.084 "num_base_bdevs": 2, 00:18:44.084 "num_base_bdevs_discovered": 2, 00:18:44.084 "num_base_bdevs_operational": 2, 00:18:44.084 "base_bdevs_list": [ 00:18:44.084 { 00:18:44.084 "name": "pt1", 00:18:44.084 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:44.084 "is_configured": true, 00:18:44.084 "data_offset": 256, 00:18:44.084 "data_size": 7936 00:18:44.084 }, 00:18:44.084 { 00:18:44.085 "name": "pt2", 00:18:44.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.085 "is_configured": true, 00:18:44.085 "data_offset": 256, 00:18:44.085 "data_size": 7936 00:18:44.085 } 00:18:44.085 ] 00:18:44.085 } 00:18:44.085 } 00:18:44.085 }' 00:18:44.085 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:44.345 pt2' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:44.345 13:35:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.345 [2024-11-18 13:35:14.320388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=08f03f67-f7b9-4c9e-889f-1e98cc217c1e 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 08f03f67-f7b9-4c9e-889f-1e98cc217c1e ']' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.345 [2024-11-18 13:35:14.368048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.345 [2024-11-18 13:35:14.368115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.345 [2024-11-18 13:35:14.368216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.345 [2024-11-18 13:35:14.368293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.345 [2024-11-18 13:35:14.368345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.345 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.606 13:35:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:44.606 13:35:14 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.606 [2024-11-18 13:35:14.507826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:44.606 [2024-11-18 13:35:14.509600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:44.606 [2024-11-18 13:35:14.509662] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:44.606 [2024-11-18 13:35:14.509711] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:44.606 [2024-11-18 13:35:14.509724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.606 [2024-11-18 13:35:14.509733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:44.606 request: 00:18:44.606 { 00:18:44.606 "name": "raid_bdev1", 00:18:44.606 "raid_level": "raid1", 00:18:44.606 "base_bdevs": [ 00:18:44.606 "malloc1", 00:18:44.606 "malloc2" 00:18:44.606 ], 00:18:44.606 "superblock": false, 00:18:44.606 "method": "bdev_raid_create", 00:18:44.606 "req_id": 1 00:18:44.606 } 00:18:44.606 Got JSON-RPC error response 00:18:44.606 response: 00:18:44.606 { 00:18:44.606 "code": -17, 00:18:44.606 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:44.606 } 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:44.606 13:35:14 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.606 [2024-11-18 13:35:14.571695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:44.606 [2024-11-18 13:35:14.571789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.606 [2024-11-18 13:35:14.571819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:44.606 [2024-11-18 13:35:14.571847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.606 [2024-11-18 13:35:14.573587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.606 [2024-11-18 13:35:14.573655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:44.606 [2024-11-18 13:35:14.573715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:44.606 [2024-11-18 13:35:14.573785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:44.606 pt1 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.606 13:35:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.606 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.606 
"name": "raid_bdev1", 00:18:44.606 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:44.606 "strip_size_kb": 0, 00:18:44.606 "state": "configuring", 00:18:44.606 "raid_level": "raid1", 00:18:44.606 "superblock": true, 00:18:44.606 "num_base_bdevs": 2, 00:18:44.606 "num_base_bdevs_discovered": 1, 00:18:44.606 "num_base_bdevs_operational": 2, 00:18:44.606 "base_bdevs_list": [ 00:18:44.606 { 00:18:44.606 "name": "pt1", 00:18:44.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:44.607 "is_configured": true, 00:18:44.607 "data_offset": 256, 00:18:44.607 "data_size": 7936 00:18:44.607 }, 00:18:44.607 { 00:18:44.607 "name": null, 00:18:44.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:44.607 "is_configured": false, 00:18:44.607 "data_offset": 256, 00:18:44.607 "data_size": 7936 00:18:44.607 } 00:18:44.607 ] 00:18:44.607 }' 00:18:44.607 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.607 13:35:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.177 [2024-11-18 13:35:15.034906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:45.177 [2024-11-18 13:35:15.035029] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.177 [2024-11-18 13:35:15.035048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:45.177 [2024-11-18 13:35:15.035058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.177 [2024-11-18 13:35:15.035174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.177 [2024-11-18 13:35:15.035188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:45.177 [2024-11-18 13:35:15.035222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:45.177 [2024-11-18 13:35:15.035243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:45.177 [2024-11-18 13:35:15.035314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:45.177 [2024-11-18 13:35:15.035324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:45.177 [2024-11-18 13:35:15.035387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:45.177 [2024-11-18 13:35:15.035457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:45.177 [2024-11-18 13:35:15.035465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:45.177 [2024-11-18 13:35:15.035520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.177 pt2 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:45.177 13:35:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.177 "name": 
"raid_bdev1", 00:18:45.177 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:45.177 "strip_size_kb": 0, 00:18:45.177 "state": "online", 00:18:45.177 "raid_level": "raid1", 00:18:45.177 "superblock": true, 00:18:45.177 "num_base_bdevs": 2, 00:18:45.177 "num_base_bdevs_discovered": 2, 00:18:45.177 "num_base_bdevs_operational": 2, 00:18:45.177 "base_bdevs_list": [ 00:18:45.177 { 00:18:45.177 "name": "pt1", 00:18:45.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.177 "is_configured": true, 00:18:45.177 "data_offset": 256, 00:18:45.177 "data_size": 7936 00:18:45.177 }, 00:18:45.177 { 00:18:45.177 "name": "pt2", 00:18:45.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.177 "is_configured": true, 00:18:45.177 "data_offset": 256, 00:18:45.177 "data_size": 7936 00:18:45.177 } 00:18:45.177 ] 00:18:45.177 }' 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.177 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.747 [2024-11-18 13:35:15.526368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.747 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.747 "name": "raid_bdev1", 00:18:45.747 "aliases": [ 00:18:45.747 "08f03f67-f7b9-4c9e-889f-1e98cc217c1e" 00:18:45.747 ], 00:18:45.747 "product_name": "Raid Volume", 00:18:45.747 "block_size": 4128, 00:18:45.747 "num_blocks": 7936, 00:18:45.747 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:45.747 "md_size": 32, 00:18:45.747 "md_interleave": true, 00:18:45.747 "dif_type": 0, 00:18:45.747 "assigned_rate_limits": { 00:18:45.747 "rw_ios_per_sec": 0, 00:18:45.747 "rw_mbytes_per_sec": 0, 00:18:45.747 "r_mbytes_per_sec": 0, 00:18:45.747 "w_mbytes_per_sec": 0 00:18:45.747 }, 00:18:45.747 "claimed": false, 00:18:45.747 "zoned": false, 00:18:45.747 "supported_io_types": { 00:18:45.747 "read": true, 00:18:45.747 "write": true, 00:18:45.747 "unmap": false, 00:18:45.747 "flush": false, 00:18:45.747 "reset": true, 00:18:45.747 "nvme_admin": false, 00:18:45.747 "nvme_io": false, 00:18:45.747 "nvme_io_md": false, 00:18:45.747 "write_zeroes": true, 00:18:45.747 "zcopy": false, 00:18:45.747 "get_zone_info": false, 00:18:45.747 "zone_management": false, 00:18:45.747 "zone_append": false, 00:18:45.747 "compare": false, 00:18:45.747 "compare_and_write": false, 00:18:45.747 "abort": false, 00:18:45.747 "seek_hole": false, 00:18:45.748 "seek_data": false, 00:18:45.748 "copy": false, 00:18:45.748 "nvme_iov_md": false 00:18:45.748 }, 
00:18:45.748 "memory_domains": [ 00:18:45.748 { 00:18:45.748 "dma_device_id": "system", 00:18:45.748 "dma_device_type": 1 00:18:45.748 }, 00:18:45.748 { 00:18:45.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.748 "dma_device_type": 2 00:18:45.748 }, 00:18:45.748 { 00:18:45.748 "dma_device_id": "system", 00:18:45.748 "dma_device_type": 1 00:18:45.748 }, 00:18:45.748 { 00:18:45.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.748 "dma_device_type": 2 00:18:45.748 } 00:18:45.748 ], 00:18:45.748 "driver_specific": { 00:18:45.748 "raid": { 00:18:45.748 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:45.748 "strip_size_kb": 0, 00:18:45.748 "state": "online", 00:18:45.748 "raid_level": "raid1", 00:18:45.748 "superblock": true, 00:18:45.748 "num_base_bdevs": 2, 00:18:45.748 "num_base_bdevs_discovered": 2, 00:18:45.748 "num_base_bdevs_operational": 2, 00:18:45.748 "base_bdevs_list": [ 00:18:45.748 { 00:18:45.748 "name": "pt1", 00:18:45.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.748 "is_configured": true, 00:18:45.748 "data_offset": 256, 00:18:45.748 "data_size": 7936 00:18:45.748 }, 00:18:45.748 { 00:18:45.748 "name": "pt2", 00:18:45.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.748 "is_configured": true, 00:18:45.748 "data_offset": 256, 00:18:45.748 "data_size": 7936 00:18:45.748 } 00:18:45.748 ] 00:18:45.748 } 00:18:45.748 } 00:18:45.748 }' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:45.748 pt2' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.748 [2024-11-18 13:35:15.749963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 08f03f67-f7b9-4c9e-889f-1e98cc217c1e '!=' 08f03f67-f7b9-4c9e-889f-1e98cc217c1e ']' 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.748 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.748 [2024-11-18 13:35:15.793694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:18:46.008 "name": "raid_bdev1", 00:18:46.008 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:46.008 "strip_size_kb": 0, 00:18:46.008 "state": "online", 00:18:46.008 "raid_level": "raid1", 00:18:46.008 "superblock": true, 00:18:46.008 "num_base_bdevs": 2, 00:18:46.008 "num_base_bdevs_discovered": 1, 00:18:46.008 "num_base_bdevs_operational": 1, 00:18:46.008 "base_bdevs_list": [ 00:18:46.008 { 00:18:46.008 "name": null, 00:18:46.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.008 "is_configured": false, 00:18:46.008 "data_offset": 0, 00:18:46.008 "data_size": 7936 00:18:46.008 }, 00:18:46.008 { 00:18:46.008 "name": "pt2", 00:18:46.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.008 "is_configured": true, 00:18:46.008 "data_offset": 256, 00:18:46.008 "data_size": 7936 00:18:46.008 } 00:18:46.008 ] 00:18:46.008 }' 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.008 13:35:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.268 [2024-11-18 13:35:16.248859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.268 [2024-11-18 13:35:16.248885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.268 [2024-11-18 13:35:16.248939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.268 [2024-11-18 13:35:16.248989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.268 [2024-11-18 
13:35:16.249005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.268 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.528 [2024-11-18 13:35:16.320757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:46.528 [2024-11-18 13:35:16.320805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.528 [2024-11-18 13:35:16.320820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:46.528 [2024-11-18 13:35:16.320830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.528 [2024-11-18 13:35:16.322752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.528 [2024-11-18 13:35:16.322789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:46.528 [2024-11-18 13:35:16.322833] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:46.528 [2024-11-18 13:35:16.322884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.528 [2024-11-18 13:35:16.322952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:46.528 [2024-11-18 13:35:16.322964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:18:46.528 [2024-11-18 13:35:16.323044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:46.528 [2024-11-18 13:35:16.323106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:46.528 [2024-11-18 13:35:16.323113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:46.528 [2024-11-18 13:35:16.323193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.528 pt2 00:18:46.528 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.528 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.528 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.529 "name": "raid_bdev1", 00:18:46.529 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:46.529 "strip_size_kb": 0, 00:18:46.529 "state": "online", 00:18:46.529 "raid_level": "raid1", 00:18:46.529 "superblock": true, 00:18:46.529 "num_base_bdevs": 2, 00:18:46.529 "num_base_bdevs_discovered": 1, 00:18:46.529 "num_base_bdevs_operational": 1, 00:18:46.529 "base_bdevs_list": [ 00:18:46.529 { 00:18:46.529 "name": null, 00:18:46.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.529 "is_configured": false, 00:18:46.529 "data_offset": 256, 00:18:46.529 "data_size": 7936 00:18:46.529 }, 00:18:46.529 { 00:18:46.529 "name": "pt2", 00:18:46.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.529 "is_configured": true, 00:18:46.529 "data_offset": 256, 00:18:46.529 "data_size": 7936 00:18:46.529 } 00:18:46.529 ] 00:18:46.529 }' 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.529 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.789 [2024-11-18 13:35:16.736012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.789 [2024-11-18 13:35:16.736039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.789 [2024-11-18 13:35:16.736086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.789 [2024-11-18 13:35:16.736122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.789 [2024-11-18 13:35:16.736142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.789 [2024-11-18 13:35:16.795941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:46.789 [2024-11-18 13:35:16.795988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.789 [2024-11-18 13:35:16.796004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:46.789 [2024-11-18 13:35:16.796013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.789 [2024-11-18 13:35:16.797759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.789 [2024-11-18 13:35:16.797790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:46.789 [2024-11-18 13:35:16.797830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:46.789 [2024-11-18 13:35:16.797872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:46.789 [2024-11-18 13:35:16.797946] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:46.789 [2024-11-18 13:35:16.797955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.789 [2024-11-18 13:35:16.797970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:46.789 [2024-11-18 13:35:16.798021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.789 [2024-11-18 13:35:16.798074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:46.789 [2024-11-18 13:35:16.798081] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:46.789 [2024-11-18 13:35:16.798151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:46.789 [2024-11-18 13:35:16.798207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:46.789 [2024-11-18 13:35:16.798218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:46.789 [2024-11-18 13:35:16.798278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.789 pt1 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.789 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.789 "name": "raid_bdev1", 00:18:46.790 "uuid": "08f03f67-f7b9-4c9e-889f-1e98cc217c1e", 00:18:46.790 "strip_size_kb": 0, 00:18:46.790 "state": "online", 00:18:46.790 "raid_level": "raid1", 00:18:46.790 "superblock": true, 00:18:46.790 "num_base_bdevs": 2, 00:18:46.790 "num_base_bdevs_discovered": 1, 00:18:46.790 "num_base_bdevs_operational": 1, 00:18:46.790 "base_bdevs_list": [ 00:18:46.790 { 00:18:46.790 "name": null, 00:18:46.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.790 "is_configured": false, 00:18:46.790 "data_offset": 256, 00:18:46.790 "data_size": 7936 00:18:46.790 }, 00:18:46.790 { 00:18:46.790 "name": "pt2", 00:18:46.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.790 "is_configured": true, 00:18:46.790 "data_offset": 256, 00:18:46.790 "data_size": 7936 00:18:46.790 } 00:18:46.790 ] 00:18:46.790 }' 00:18:46.790 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.790 13:35:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:47.360 [2024-11-18 13:35:17.235381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 08f03f67-f7b9-4c9e-889f-1e98cc217c1e '!=' 08f03f67-f7b9-4c9e-889f-1e98cc217c1e ']' 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88648 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88648 ']' 00:18:47.360 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88648 00:18:47.361 13:35:17 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:47.361 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.361 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88648 00:18:47.361 killing process with pid 88648 00:18:47.361 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.361 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.361 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88648' 00:18:47.361 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88648 00:18:47.361 [2024-11-18 13:35:17.306819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.361 [2024-11-18 13:35:17.306879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.361 [2024-11-18 13:35:17.306922] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.361 [2024-11-18 13:35:17.306934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:47.361 13:35:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88648 00:18:47.621 [2024-11-18 13:35:17.505211] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.562 ************************************ 00:18:48.562 END TEST raid_superblock_test_md_interleaved 00:18:48.562 ************************************ 00:18:48.562 13:35:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:48.562 00:18:48.562 real 0m5.915s 00:18:48.562 user 0m8.967s 
00:18:48.562 sys 0m1.104s 00:18:48.562 13:35:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.562 13:35:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.562 13:35:18 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:48.562 13:35:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:48.562 13:35:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.562 13:35:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.562 ************************************ 00:18:48.562 START TEST raid_rebuild_test_sb_md_interleaved 00:18:48.562 ************************************ 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:48.562 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88976 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88976 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88976 ']' 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.823 13:35:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.823 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:48.823 Zero copy mechanism will not be used. 00:18:48.823 [2024-11-18 13:35:18.707002] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:18:48.823 [2024-11-18 13:35:18.707105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88976 ] 00:18:49.083 [2024-11-18 13:35:18.880799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.083 [2024-11-18 13:35:18.986340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.342 [2024-11-18 13:35:19.158656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.342 [2024-11-18 13:35:19.158709] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.602 BaseBdev1_malloc 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.602 13:35:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.602 [2024-11-18 13:35:19.556572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:49.602 [2024-11-18 13:35:19.556637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.602 [2024-11-18 13:35:19.556655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:49.602 [2024-11-18 13:35:19.556666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.602 [2024-11-18 13:35:19.558385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.602 [2024-11-18 13:35:19.558421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:49.602 BaseBdev1 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.602 BaseBdev2_malloc 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.602 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.602 [2024-11-18 13:35:19.612651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:49.602 [2024-11-18 13:35:19.612713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.602 [2024-11-18 13:35:19.612731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:49.603 [2024-11-18 13:35:19.612743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.603 [2024-11-18 13:35:19.614406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.603 [2024-11-18 13:35:19.614442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:49.603 BaseBdev2 00:18:49.603 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.603 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:49.603 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.603 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.863 spare_malloc 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.863 spare_delay 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.863 [2024-11-18 13:35:19.711295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.863 [2024-11-18 13:35:19.711356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.863 [2024-11-18 13:35:19.711375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:49.863 [2024-11-18 13:35:19.711386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.863 [2024-11-18 13:35:19.713141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.863 [2024-11-18 13:35:19.713184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.863 spare 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.863 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.863 [2024-11-18 13:35:19.723317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.863 [2024-11-18 13:35:19.724970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.863 [2024-11-18 
13:35:19.725163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:49.863 [2024-11-18 13:35:19.725180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:49.863 [2024-11-18 13:35:19.725252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:49.863 [2024-11-18 13:35:19.725319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:49.863 [2024-11-18 13:35:19.725326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:49.863 [2024-11-18 13:35:19.725387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.864 "name": "raid_bdev1", 00:18:49.864 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:49.864 "strip_size_kb": 0, 00:18:49.864 "state": "online", 00:18:49.864 "raid_level": "raid1", 00:18:49.864 "superblock": true, 00:18:49.864 "num_base_bdevs": 2, 00:18:49.864 "num_base_bdevs_discovered": 2, 00:18:49.864 "num_base_bdevs_operational": 2, 00:18:49.864 "base_bdevs_list": [ 00:18:49.864 { 00:18:49.864 "name": "BaseBdev1", 00:18:49.864 "uuid": "c6ebef0d-9c0d-59db-b33f-7e3f0ef2d393", 00:18:49.864 "is_configured": true, 00:18:49.864 "data_offset": 256, 00:18:49.864 "data_size": 7936 00:18:49.864 }, 00:18:49.864 { 00:18:49.864 "name": "BaseBdev2", 00:18:49.864 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:49.864 "is_configured": true, 00:18:49.864 "data_offset": 256, 00:18:49.864 "data_size": 7936 00:18:49.864 } 00:18:49.864 ] 00:18:49.864 }' 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.864 13:35:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.434 13:35:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.434 [2024-11-18 13:35:20.214749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:50.434 13:35:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.434 [2024-11-18 13:35:20.306347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.434 13:35:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.434 "name": "raid_bdev1", 00:18:50.434 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:50.434 "strip_size_kb": 0, 00:18:50.434 "state": "online", 00:18:50.434 "raid_level": "raid1", 00:18:50.434 "superblock": true, 00:18:50.434 "num_base_bdevs": 2, 00:18:50.434 "num_base_bdevs_discovered": 1, 00:18:50.434 "num_base_bdevs_operational": 1, 00:18:50.434 "base_bdevs_list": [ 00:18:50.434 { 00:18:50.434 "name": null, 00:18:50.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.434 "is_configured": false, 00:18:50.434 "data_offset": 0, 00:18:50.434 "data_size": 7936 00:18:50.434 }, 00:18:50.434 { 00:18:50.434 "name": "BaseBdev2", 00:18:50.434 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:50.434 "is_configured": true, 00:18:50.434 "data_offset": 256, 00:18:50.434 "data_size": 7936 00:18:50.434 } 00:18:50.434 ] 00:18:50.434 }' 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.434 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.004 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.004 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.004 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.004 [2024-11-18 13:35:20.777621] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.004 [2024-11-18 13:35:20.794734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:51.004 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.004 13:35:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:51.004 [2024-11-18 13:35:20.796580] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.944 "name": "raid_bdev1", 00:18:51.944 
"uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:51.944 "strip_size_kb": 0, 00:18:51.944 "state": "online", 00:18:51.944 "raid_level": "raid1", 00:18:51.944 "superblock": true, 00:18:51.944 "num_base_bdevs": 2, 00:18:51.944 "num_base_bdevs_discovered": 2, 00:18:51.944 "num_base_bdevs_operational": 2, 00:18:51.944 "process": { 00:18:51.944 "type": "rebuild", 00:18:51.944 "target": "spare", 00:18:51.944 "progress": { 00:18:51.944 "blocks": 2560, 00:18:51.944 "percent": 32 00:18:51.944 } 00:18:51.944 }, 00:18:51.944 "base_bdevs_list": [ 00:18:51.944 { 00:18:51.944 "name": "spare", 00:18:51.944 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:51.944 "is_configured": true, 00:18:51.944 "data_offset": 256, 00:18:51.944 "data_size": 7936 00:18:51.944 }, 00:18:51.944 { 00:18:51.944 "name": "BaseBdev2", 00:18:51.944 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:51.944 "is_configured": true, 00:18:51.944 "data_offset": 256, 00:18:51.944 "data_size": 7936 00:18:51.944 } 00:18:51.944 ] 00:18:51.944 }' 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.944 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.945 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:51.945 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.945 13:35:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.945 [2024-11-18 13:35:21.956291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:52.205 [2024-11-18 13:35:22.001216] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:52.205 [2024-11-18 13:35:22.001275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.205 [2024-11-18 13:35:22.001290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.205 [2024-11-18 13:35:22.001302] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.205 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.205 "name": "raid_bdev1", 00:18:52.205 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:52.205 "strip_size_kb": 0, 00:18:52.205 "state": "online", 00:18:52.205 "raid_level": "raid1", 00:18:52.205 "superblock": true, 00:18:52.205 "num_base_bdevs": 2, 00:18:52.206 "num_base_bdevs_discovered": 1, 00:18:52.206 "num_base_bdevs_operational": 1, 00:18:52.206 "base_bdevs_list": [ 00:18:52.206 { 00:18:52.206 "name": null, 00:18:52.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.206 "is_configured": false, 00:18:52.206 "data_offset": 0, 00:18:52.206 "data_size": 7936 00:18:52.206 }, 00:18:52.206 { 00:18:52.206 "name": "BaseBdev2", 00:18:52.206 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:52.206 "is_configured": true, 00:18:52.206 "data_offset": 256, 00:18:52.206 "data_size": 7936 00:18:52.206 } 00:18:52.206 ] 00:18:52.206 }' 00:18:52.206 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.206 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.466 "name": "raid_bdev1", 00:18:52.466 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:52.466 "strip_size_kb": 0, 00:18:52.466 "state": "online", 00:18:52.466 "raid_level": "raid1", 00:18:52.466 "superblock": true, 00:18:52.466 "num_base_bdevs": 2, 00:18:52.466 "num_base_bdevs_discovered": 1, 00:18:52.466 "num_base_bdevs_operational": 1, 00:18:52.466 "base_bdevs_list": [ 00:18:52.466 { 00:18:52.466 "name": null, 00:18:52.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.466 "is_configured": false, 00:18:52.466 "data_offset": 0, 00:18:52.466 "data_size": 7936 00:18:52.466 }, 00:18:52.466 { 00:18:52.466 "name": "BaseBdev2", 00:18:52.466 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:52.466 "is_configured": true, 00:18:52.466 "data_offset": 256, 00:18:52.466 "data_size": 7936 00:18:52.466 } 00:18:52.466 ] 00:18:52.466 }' 
00:18:52.466 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.726 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.726 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.726 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.726 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:52.726 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.726 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.726 [2024-11-18 13:35:22.614673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.726 [2024-11-18 13:35:22.629773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:52.726 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.726 13:35:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:52.726 [2024-11-18 13:35:22.631588] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.666 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.666 "name": "raid_bdev1", 00:18:53.666 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:53.666 "strip_size_kb": 0, 00:18:53.666 "state": "online", 00:18:53.666 "raid_level": "raid1", 00:18:53.666 "superblock": true, 00:18:53.666 "num_base_bdevs": 2, 00:18:53.666 "num_base_bdevs_discovered": 2, 00:18:53.666 "num_base_bdevs_operational": 2, 00:18:53.666 "process": { 00:18:53.666 "type": "rebuild", 00:18:53.666 "target": "spare", 00:18:53.666 "progress": { 00:18:53.666 "blocks": 2560, 00:18:53.666 "percent": 32 00:18:53.666 } 00:18:53.666 }, 00:18:53.666 "base_bdevs_list": [ 00:18:53.666 { 00:18:53.666 "name": "spare", 00:18:53.666 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:53.666 "is_configured": true, 00:18:53.666 "data_offset": 256, 00:18:53.666 "data_size": 7936 00:18:53.666 }, 00:18:53.666 { 00:18:53.666 "name": "BaseBdev2", 00:18:53.666 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:53.666 "is_configured": true, 00:18:53.666 "data_offset": 256, 00:18:53.666 "data_size": 7936 00:18:53.666 } 00:18:53.666 ] 00:18:53.666 }' 00:18:53.666 13:35:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:53.927 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=737 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.927 13:35:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.927 "name": "raid_bdev1", 00:18:53.927 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:53.927 "strip_size_kb": 0, 00:18:53.927 "state": "online", 00:18:53.927 "raid_level": "raid1", 00:18:53.927 "superblock": true, 00:18:53.927 "num_base_bdevs": 2, 00:18:53.927 "num_base_bdevs_discovered": 2, 00:18:53.927 "num_base_bdevs_operational": 2, 00:18:53.927 "process": { 00:18:53.927 "type": "rebuild", 00:18:53.927 "target": "spare", 00:18:53.927 "progress": { 00:18:53.927 "blocks": 2816, 00:18:53.927 "percent": 35 00:18:53.927 } 00:18:53.927 }, 00:18:53.927 "base_bdevs_list": [ 00:18:53.927 { 00:18:53.927 "name": "spare", 00:18:53.927 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:53.927 "is_configured": true, 00:18:53.927 "data_offset": 256, 00:18:53.927 "data_size": 7936 00:18:53.927 }, 00:18:53.927 { 00:18:53.927 "name": "BaseBdev2", 00:18:53.927 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:53.927 "is_configured": true, 00:18:53.927 "data_offset": 256, 00:18:53.927 "data_size": 7936 00:18:53.927 } 00:18:53.927 ] 00:18:53.927 }' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.927 13:35:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.867 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.867 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.867 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.867 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.867 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.867 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.154 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.154 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.154 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.154 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.154 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.154 13:35:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.154 "name": "raid_bdev1", 00:18:55.154 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:55.154 "strip_size_kb": 0, 00:18:55.154 "state": "online", 00:18:55.154 "raid_level": "raid1", 00:18:55.154 "superblock": true, 00:18:55.154 "num_base_bdevs": 2, 00:18:55.154 "num_base_bdevs_discovered": 2, 00:18:55.154 "num_base_bdevs_operational": 2, 00:18:55.154 "process": { 00:18:55.154 "type": "rebuild", 00:18:55.154 "target": "spare", 00:18:55.154 "progress": { 00:18:55.154 "blocks": 5632, 00:18:55.154 "percent": 70 00:18:55.154 } 00:18:55.154 }, 00:18:55.154 "base_bdevs_list": [ 00:18:55.154 { 00:18:55.154 "name": "spare", 00:18:55.154 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:55.154 "is_configured": true, 00:18:55.154 "data_offset": 256, 00:18:55.154 "data_size": 7936 00:18:55.154 }, 00:18:55.154 { 00:18:55.154 "name": "BaseBdev2", 00:18:55.154 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:55.154 "is_configured": true, 00:18:55.154 "data_offset": 256, 00:18:55.154 "data_size": 7936 00:18:55.154 } 00:18:55.154 ] 00:18:55.154 }' 00:18:55.154 13:35:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.154 13:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.154 13:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.154 13:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.154 13:35:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.777 [2024-11-18 13:35:25.743230] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:55.777 [2024-11-18 13:35:25.743373] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:55.777 [2024-11-18 13:35:25.743469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.037 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.296 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.296 "name": "raid_bdev1", 00:18:56.296 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:56.296 "strip_size_kb": 0, 00:18:56.296 "state": "online", 00:18:56.296 "raid_level": "raid1", 00:18:56.296 "superblock": true, 00:18:56.296 "num_base_bdevs": 2, 00:18:56.296 
"num_base_bdevs_discovered": 2, 00:18:56.296 "num_base_bdevs_operational": 2, 00:18:56.296 "base_bdevs_list": [ 00:18:56.296 { 00:18:56.296 "name": "spare", 00:18:56.296 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:56.296 "is_configured": true, 00:18:56.296 "data_offset": 256, 00:18:56.296 "data_size": 7936 00:18:56.297 }, 00:18:56.297 { 00:18:56.297 "name": "BaseBdev2", 00:18:56.297 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:56.297 "is_configured": true, 00:18:56.297 "data_offset": 256, 00:18:56.297 "data_size": 7936 00:18:56.297 } 00:18:56.297 ] 00:18:56.297 }' 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.297 
13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.297 "name": "raid_bdev1", 00:18:56.297 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:56.297 "strip_size_kb": 0, 00:18:56.297 "state": "online", 00:18:56.297 "raid_level": "raid1", 00:18:56.297 "superblock": true, 00:18:56.297 "num_base_bdevs": 2, 00:18:56.297 "num_base_bdevs_discovered": 2, 00:18:56.297 "num_base_bdevs_operational": 2, 00:18:56.297 "base_bdevs_list": [ 00:18:56.297 { 00:18:56.297 "name": "spare", 00:18:56.297 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:56.297 "is_configured": true, 00:18:56.297 "data_offset": 256, 00:18:56.297 "data_size": 7936 00:18:56.297 }, 00:18:56.297 { 00:18:56.297 "name": "BaseBdev2", 00:18:56.297 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:56.297 "is_configured": true, 00:18:56.297 "data_offset": 256, 00:18:56.297 "data_size": 7936 00:18:56.297 } 00:18:56.297 ] 00:18:56.297 }' 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.297 13:35:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.297 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.557 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.557 "name": 
"raid_bdev1", 00:18:56.557 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:56.557 "strip_size_kb": 0, 00:18:56.557 "state": "online", 00:18:56.557 "raid_level": "raid1", 00:18:56.557 "superblock": true, 00:18:56.557 "num_base_bdevs": 2, 00:18:56.557 "num_base_bdevs_discovered": 2, 00:18:56.557 "num_base_bdevs_operational": 2, 00:18:56.557 "base_bdevs_list": [ 00:18:56.557 { 00:18:56.557 "name": "spare", 00:18:56.557 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:56.557 "is_configured": true, 00:18:56.557 "data_offset": 256, 00:18:56.557 "data_size": 7936 00:18:56.557 }, 00:18:56.557 { 00:18:56.557 "name": "BaseBdev2", 00:18:56.557 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:56.557 "is_configured": true, 00:18:56.557 "data_offset": 256, 00:18:56.557 "data_size": 7936 00:18:56.557 } 00:18:56.557 ] 00:18:56.557 }' 00:18:56.557 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.557 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.816 [2024-11-18 13:35:26.766611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.816 [2024-11-18 13:35:26.766646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.816 [2024-11-18 13:35:26.766723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.816 [2024-11-18 13:35:26.766787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.816 [2024-11-18 
13:35:26.766796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.816 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.817 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.817 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:56.817 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.817 13:35:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.817 [2024-11-18 13:35:26.838479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:56.817 [2024-11-18 13:35:26.838577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.817 [2024-11-18 13:35:26.838600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:56.817 [2024-11-18 13:35:26.838609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.817 [2024-11-18 13:35:26.840614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.817 [2024-11-18 13:35:26.840650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:56.817 [2024-11-18 13:35:26.840699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:56.817 [2024-11-18 13:35:26.840759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.817 [2024-11-18 13:35:26.840856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.817 spare 00:18:56.817 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.817 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:56.817 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.817 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.076 [2024-11-18 13:35:26.940735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:57.076 [2024-11-18 13:35:26.940766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:57.076 [2024-11-18 13:35:26.940848] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:57.076 [2024-11-18 13:35:26.940918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:57.076 [2024-11-18 13:35:26.940925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:57.076 [2024-11-18 13:35:26.940995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.076 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.076 13:35:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.077 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.077 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.077 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.077 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.077 "name": "raid_bdev1", 00:18:57.077 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:57.077 "strip_size_kb": 0, 00:18:57.077 "state": "online", 00:18:57.077 "raid_level": "raid1", 00:18:57.077 "superblock": true, 00:18:57.077 "num_base_bdevs": 2, 00:18:57.077 "num_base_bdevs_discovered": 2, 00:18:57.077 "num_base_bdevs_operational": 2, 00:18:57.077 "base_bdevs_list": [ 00:18:57.077 { 00:18:57.077 "name": "spare", 00:18:57.077 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:57.077 "is_configured": true, 00:18:57.077 "data_offset": 256, 00:18:57.077 "data_size": 7936 00:18:57.077 }, 00:18:57.077 { 00:18:57.077 "name": "BaseBdev2", 00:18:57.077 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:57.077 "is_configured": true, 00:18:57.077 "data_offset": 256, 00:18:57.077 "data_size": 7936 00:18:57.077 } 00:18:57.077 ] 00:18:57.077 }' 00:18:57.077 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.077 13:35:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.647 13:35:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.647 "name": "raid_bdev1", 00:18:57.647 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:57.647 "strip_size_kb": 0, 00:18:57.647 "state": "online", 00:18:57.647 "raid_level": "raid1", 00:18:57.647 "superblock": true, 00:18:57.647 "num_base_bdevs": 2, 00:18:57.647 "num_base_bdevs_discovered": 2, 00:18:57.647 "num_base_bdevs_operational": 2, 00:18:57.647 "base_bdevs_list": [ 00:18:57.647 { 00:18:57.647 "name": "spare", 00:18:57.647 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:57.647 "is_configured": true, 00:18:57.647 "data_offset": 256, 00:18:57.647 "data_size": 7936 00:18:57.647 }, 00:18:57.647 { 00:18:57.647 "name": "BaseBdev2", 00:18:57.647 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:57.647 "is_configured": true, 00:18:57.647 "data_offset": 256, 00:18:57.647 "data_size": 7936 00:18:57.647 } 00:18:57.647 ] 00:18:57.647 }' 00:18:57.647 13:35:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 [2024-11-18 13:35:27.585283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.647 13:35:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.647 "name": "raid_bdev1", 00:18:57.647 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:57.647 "strip_size_kb": 0, 00:18:57.647 "state": "online", 00:18:57.647 
"raid_level": "raid1", 00:18:57.647 "superblock": true, 00:18:57.647 "num_base_bdevs": 2, 00:18:57.647 "num_base_bdevs_discovered": 1, 00:18:57.647 "num_base_bdevs_operational": 1, 00:18:57.647 "base_bdevs_list": [ 00:18:57.647 { 00:18:57.647 "name": null, 00:18:57.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.647 "is_configured": false, 00:18:57.647 "data_offset": 0, 00:18:57.647 "data_size": 7936 00:18:57.647 }, 00:18:57.647 { 00:18:57.647 "name": "BaseBdev2", 00:18:57.647 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:57.647 "is_configured": true, 00:18:57.647 "data_offset": 256, 00:18:57.647 "data_size": 7936 00:18:57.647 } 00:18:57.647 ] 00:18:57.647 }' 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.647 13:35:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.216 13:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:58.216 13:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.216 13:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.216 [2024-11-18 13:35:28.064483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.216 [2024-11-18 13:35:28.064696] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.216 [2024-11-18 13:35:28.064766] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:58.216 [2024-11-18 13:35:28.064819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.216 [2024-11-18 13:35:28.079800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:58.216 13:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.216 13:35:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:58.216 [2024-11-18 13:35:28.081556] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.155 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:59.155 "name": "raid_bdev1", 00:18:59.155 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:59.155 "strip_size_kb": 0, 00:18:59.155 "state": "online", 00:18:59.155 "raid_level": "raid1", 00:18:59.155 "superblock": true, 00:18:59.155 "num_base_bdevs": 2, 00:18:59.155 "num_base_bdevs_discovered": 2, 00:18:59.155 "num_base_bdevs_operational": 2, 00:18:59.155 "process": { 00:18:59.155 "type": "rebuild", 00:18:59.155 "target": "spare", 00:18:59.155 "progress": { 00:18:59.156 "blocks": 2560, 00:18:59.156 "percent": 32 00:18:59.156 } 00:18:59.156 }, 00:18:59.156 "base_bdevs_list": [ 00:18:59.156 { 00:18:59.156 "name": "spare", 00:18:59.156 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:18:59.156 "is_configured": true, 00:18:59.156 "data_offset": 256, 00:18:59.156 "data_size": 7936 00:18:59.156 }, 00:18:59.156 { 00:18:59.156 "name": "BaseBdev2", 00:18:59.156 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:59.156 "is_configured": true, 00:18:59.156 "data_offset": 256, 00:18:59.156 "data_size": 7936 00:18:59.156 } 00:18:59.156 ] 00:18:59.156 }' 00:18:59.156 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.156 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.156 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.415 [2024-11-18 13:35:29.233292] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.415 [2024-11-18 13:35:29.286177] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:59.415 [2024-11-18 13:35:29.286234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.415 [2024-11-18 13:35:29.286247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.415 [2024-11-18 13:35:29.286256] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.415 13:35:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.415 "name": "raid_bdev1", 00:18:59.415 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:18:59.415 "strip_size_kb": 0, 00:18:59.415 "state": "online", 00:18:59.415 "raid_level": "raid1", 00:18:59.415 "superblock": true, 00:18:59.415 "num_base_bdevs": 2, 00:18:59.415 "num_base_bdevs_discovered": 1, 00:18:59.415 "num_base_bdevs_operational": 1, 00:18:59.415 "base_bdevs_list": [ 00:18:59.415 { 00:18:59.415 "name": null, 00:18:59.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.415 "is_configured": false, 00:18:59.415 "data_offset": 0, 00:18:59.415 "data_size": 7936 00:18:59.415 }, 00:18:59.415 { 00:18:59.415 "name": "BaseBdev2", 00:18:59.415 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:18:59.415 "is_configured": true, 00:18:59.415 "data_offset": 256, 00:18:59.415 "data_size": 7936 00:18:59.415 } 00:18:59.415 ] 00:18:59.415 }' 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.415 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.984 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:59.984 13:35:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.984 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.984 [2024-11-18 13:35:29.754579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:59.984 [2024-11-18 13:35:29.754707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.984 [2024-11-18 13:35:29.754744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:59.984 [2024-11-18 13:35:29.754773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.984 [2024-11-18 13:35:29.754987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.984 [2024-11-18 13:35:29.755041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:59.984 [2024-11-18 13:35:29.755111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:59.984 [2024-11-18 13:35:29.755167] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:59.984 [2024-11-18 13:35:29.755210] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:59.984 [2024-11-18 13:35:29.755273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:59.984 [2024-11-18 13:35:29.769866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:59.984 spare 00:18:59.984 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.984 [2024-11-18 13:35:29.771665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.984 13:35:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.923 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:00.923 "name": "raid_bdev1", 00:19:00.923 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:19:00.923 "strip_size_kb": 0, 00:19:00.923 "state": "online", 00:19:00.923 "raid_level": "raid1", 00:19:00.923 "superblock": true, 00:19:00.923 "num_base_bdevs": 2, 00:19:00.923 "num_base_bdevs_discovered": 2, 00:19:00.923 "num_base_bdevs_operational": 2, 00:19:00.923 "process": { 00:19:00.923 "type": "rebuild", 00:19:00.923 "target": "spare", 00:19:00.923 "progress": { 00:19:00.923 "blocks": 2560, 00:19:00.923 "percent": 32 00:19:00.923 } 00:19:00.923 }, 00:19:00.923 "base_bdevs_list": [ 00:19:00.923 { 00:19:00.923 "name": "spare", 00:19:00.923 "uuid": "5bb90b29-f72b-569d-a1fe-2cb152fe9b73", 00:19:00.924 "is_configured": true, 00:19:00.924 "data_offset": 256, 00:19:00.924 "data_size": 7936 00:19:00.924 }, 00:19:00.924 { 00:19:00.924 "name": "BaseBdev2", 00:19:00.924 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:19:00.924 "is_configured": true, 00:19:00.924 "data_offset": 256, 00:19:00.924 "data_size": 7936 00:19:00.924 } 00:19:00.924 ] 00:19:00.924 }' 00:19:00.924 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.924 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.924 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.924 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.924 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:00.924 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.924 13:35:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.924 [2024-11-18 
13:35:30.931329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.184 [2024-11-18 13:35:30.976225] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:01.184 [2024-11-18 13:35:30.976281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.184 [2024-11-18 13:35:30.976297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.184 [2024-11-18 13:35:30.976304] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.184 13:35:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.184 "name": "raid_bdev1", 00:19:01.184 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:19:01.184 "strip_size_kb": 0, 00:19:01.184 "state": "online", 00:19:01.184 "raid_level": "raid1", 00:19:01.184 "superblock": true, 00:19:01.184 "num_base_bdevs": 2, 00:19:01.184 "num_base_bdevs_discovered": 1, 00:19:01.184 "num_base_bdevs_operational": 1, 00:19:01.184 "base_bdevs_list": [ 00:19:01.184 { 00:19:01.184 "name": null, 00:19:01.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.184 "is_configured": false, 00:19:01.184 "data_offset": 0, 00:19:01.184 "data_size": 7936 00:19:01.184 }, 00:19:01.184 { 00:19:01.184 "name": "BaseBdev2", 00:19:01.184 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:19:01.184 "is_configured": true, 00:19:01.184 "data_offset": 256, 00:19:01.184 "data_size": 7936 00:19:01.184 } 00:19:01.184 ] 00:19:01.184 }' 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.184 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.444 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.444 13:35:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.444 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.444 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.444 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.444 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.444 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.444 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.444 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.703 "name": "raid_bdev1", 00:19:01.703 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:19:01.703 "strip_size_kb": 0, 00:19:01.703 "state": "online", 00:19:01.703 "raid_level": "raid1", 00:19:01.703 "superblock": true, 00:19:01.703 "num_base_bdevs": 2, 00:19:01.703 "num_base_bdevs_discovered": 1, 00:19:01.703 "num_base_bdevs_operational": 1, 00:19:01.703 "base_bdevs_list": [ 00:19:01.703 { 00:19:01.703 "name": null, 00:19:01.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.703 "is_configured": false, 00:19:01.703 "data_offset": 0, 00:19:01.703 "data_size": 7936 00:19:01.703 }, 00:19:01.703 { 00:19:01.703 "name": "BaseBdev2", 00:19:01.703 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:19:01.703 "is_configured": true, 00:19:01.703 "data_offset": 256, 
00:19:01.703 "data_size": 7936 00:19:01.703 } 00:19:01.703 ] 00:19:01.703 }' 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.703 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:01.704 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.704 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.704 [2024-11-18 13:35:31.648925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:01.704 [2024-11-18 13:35:31.649045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.704 [2024-11-18 13:35:31.649070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:01.704 [2024-11-18 13:35:31.649079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.704 [2024-11-18 13:35:31.649250] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.704 [2024-11-18 13:35:31.649263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:01.704 [2024-11-18 13:35:31.649314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:01.704 [2024-11-18 13:35:31.649326] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:01.704 [2024-11-18 13:35:31.649335] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:01.704 [2024-11-18 13:35:31.649344] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:01.704 BaseBdev1 00:19:01.704 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.704 13:35:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.642 13:35:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.642 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.901 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.901 "name": "raid_bdev1", 00:19:02.901 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:19:02.901 "strip_size_kb": 0, 00:19:02.901 "state": "online", 00:19:02.901 "raid_level": "raid1", 00:19:02.901 "superblock": true, 00:19:02.901 "num_base_bdevs": 2, 00:19:02.901 "num_base_bdevs_discovered": 1, 00:19:02.901 "num_base_bdevs_operational": 1, 00:19:02.901 "base_bdevs_list": [ 00:19:02.901 { 00:19:02.901 "name": null, 00:19:02.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.901 "is_configured": false, 00:19:02.901 "data_offset": 0, 00:19:02.901 "data_size": 7936 00:19:02.901 }, 00:19:02.901 { 00:19:02.901 "name": "BaseBdev2", 00:19:02.901 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:19:02.901 "is_configured": true, 00:19:02.901 "data_offset": 256, 00:19:02.901 "data_size": 7936 00:19:02.901 } 00:19:02.901 ] 00:19:02.901 }' 00:19:02.901 13:35:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.901 13:35:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.161 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.161 "name": "raid_bdev1", 00:19:03.161 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:19:03.161 "strip_size_kb": 0, 00:19:03.161 "state": "online", 00:19:03.161 "raid_level": "raid1", 00:19:03.161 "superblock": true, 00:19:03.161 "num_base_bdevs": 2, 00:19:03.161 "num_base_bdevs_discovered": 1, 00:19:03.161 "num_base_bdevs_operational": 1, 00:19:03.161 "base_bdevs_list": [ 00:19:03.161 { 00:19:03.161 "name": 
null, 00:19:03.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.161 "is_configured": false, 00:19:03.161 "data_offset": 0, 00:19:03.161 "data_size": 7936 00:19:03.161 }, 00:19:03.161 { 00:19:03.162 "name": "BaseBdev2", 00:19:03.162 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:19:03.162 "is_configured": true, 00:19:03.162 "data_offset": 256, 00:19:03.162 "data_size": 7936 00:19:03.162 } 00:19:03.162 ] 00:19:03.162 }' 00:19:03.162 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.162 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.162 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.162 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.162 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.162 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:19:03.162 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.162 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.421 [2024-11-18 13:35:33.222272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.421 [2024-11-18 13:35:33.222421] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:03.421 [2024-11-18 13:35:33.222437] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:03.421 request: 00:19:03.421 { 00:19:03.421 "base_bdev": "BaseBdev1", 00:19:03.421 "raid_bdev": "raid_bdev1", 00:19:03.421 "method": "bdev_raid_add_base_bdev", 00:19:03.421 "req_id": 1 00:19:03.421 } 00:19:03.421 Got JSON-RPC error response 00:19:03.421 response: 00:19:03.421 { 00:19:03.421 "code": -22, 00:19:03.421 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:03.421 } 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.421 13:35:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.361 "name": "raid_bdev1", 00:19:04.361 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:19:04.361 "strip_size_kb": 0, 
00:19:04.361 "state": "online", 00:19:04.361 "raid_level": "raid1", 00:19:04.361 "superblock": true, 00:19:04.361 "num_base_bdevs": 2, 00:19:04.361 "num_base_bdevs_discovered": 1, 00:19:04.361 "num_base_bdevs_operational": 1, 00:19:04.361 "base_bdevs_list": [ 00:19:04.361 { 00:19:04.361 "name": null, 00:19:04.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.361 "is_configured": false, 00:19:04.361 "data_offset": 0, 00:19:04.361 "data_size": 7936 00:19:04.361 }, 00:19:04.361 { 00:19:04.361 "name": "BaseBdev2", 00:19:04.361 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:19:04.361 "is_configured": true, 00:19:04.361 "data_offset": 256, 00:19:04.361 "data_size": 7936 00:19:04.361 } 00:19:04.361 ] 00:19:04.361 }' 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.361 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.621 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.621 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.621 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.621 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.621 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.621 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.621 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.621 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.621 
13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.881 "name": "raid_bdev1", 00:19:04.881 "uuid": "70ab45c5-59e4-4f24-ac24-ea031d44835d", 00:19:04.881 "strip_size_kb": 0, 00:19:04.881 "state": "online", 00:19:04.881 "raid_level": "raid1", 00:19:04.881 "superblock": true, 00:19:04.881 "num_base_bdevs": 2, 00:19:04.881 "num_base_bdevs_discovered": 1, 00:19:04.881 "num_base_bdevs_operational": 1, 00:19:04.881 "base_bdevs_list": [ 00:19:04.881 { 00:19:04.881 "name": null, 00:19:04.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.881 "is_configured": false, 00:19:04.881 "data_offset": 0, 00:19:04.881 "data_size": 7936 00:19:04.881 }, 00:19:04.881 { 00:19:04.881 "name": "BaseBdev2", 00:19:04.881 "uuid": "7b614e24-64d6-5dc3-9111-0619b5154c34", 00:19:04.881 "is_configured": true, 00:19:04.881 "data_offset": 256, 00:19:04.881 "data_size": 7936 00:19:04.881 } 00:19:04.881 ] 00:19:04.881 }' 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88976 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88976 ']' 00:19:04.881 13:35:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88976 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88976 00:19:04.881 killing process with pid 88976 00:19:04.881 Received shutdown signal, test time was about 60.000000 seconds 00:19:04.881 00:19:04.881 Latency(us) 00:19:04.881 [2024-11-18T13:35:34.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.881 [2024-11-18T13:35:34.935Z] =================================================================================================================== 00:19:04.881 [2024-11-18T13:35:34.935Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88976' 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88976 00:19:04.881 13:35:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88976 00:19:04.881 [2024-11-18 13:35:34.828024] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:04.881 [2024-11-18 13:35:34.828149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.882 [2024-11-18 13:35:34.828198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:04.882 [2024-11-18 13:35:34.828213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:05.141 [2024-11-18 13:35:35.107772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.080 13:35:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:06.080 00:19:06.080 real 0m17.515s 00:19:06.080 user 0m23.070s 00:19:06.080 sys 0m1.663s 00:19:06.080 13:35:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.080 ************************************ 00:19:06.080 13:35:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:06.080 END TEST raid_rebuild_test_sb_md_interleaved 00:19:06.080 ************************************ 00:19:06.340 13:35:36 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:06.340 13:35:36 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:06.340 13:35:36 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88976 ']' 00:19:06.340 13:35:36 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88976 00:19:06.340 13:35:36 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:06.340 00:19:06.340 real 11m59.273s 00:19:06.340 user 16m8.446s 00:19:06.340 sys 1m58.110s 00:19:06.340 13:35:36 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.340 13:35:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.340 ************************************ 00:19:06.340 END TEST bdev_raid 00:19:06.340 ************************************ 00:19:06.340 13:35:36 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:06.340 13:35:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.340 13:35:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.340 13:35:36 -- common/autotest_common.sh@10 -- # set +x 00:19:06.340 
************************************ 00:19:06.340 START TEST spdkcli_raid 00:19:06.340 ************************************ 00:19:06.340 13:35:36 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:06.600 * Looking for test storage... 00:19:06.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:06.600 13:35:36 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:06.600 13:35:36 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:06.600 13:35:36 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:06.600 13:35:36 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.600 13:35:36 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:06.600 13:35:36 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.600 13:35:36 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:06.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.600 --rc genhtml_branch_coverage=1 00:19:06.600 --rc genhtml_function_coverage=1 00:19:06.600 --rc genhtml_legend=1 00:19:06.600 --rc geninfo_all_blocks=1 00:19:06.600 --rc geninfo_unexecuted_blocks=1 00:19:06.600 00:19:06.600 ' 00:19:06.600 13:35:36 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:06.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.600 --rc genhtml_branch_coverage=1 00:19:06.600 --rc genhtml_function_coverage=1 00:19:06.600 --rc genhtml_legend=1 00:19:06.600 --rc geninfo_all_blocks=1 00:19:06.600 --rc geninfo_unexecuted_blocks=1 00:19:06.600 00:19:06.600 ' 00:19:06.600 
13:35:36 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:06.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.600 --rc genhtml_branch_coverage=1 00:19:06.600 --rc genhtml_function_coverage=1 00:19:06.601 --rc genhtml_legend=1 00:19:06.601 --rc geninfo_all_blocks=1 00:19:06.601 --rc geninfo_unexecuted_blocks=1 00:19:06.601 00:19:06.601 ' 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:06.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.601 --rc genhtml_branch_coverage=1 00:19:06.601 --rc genhtml_function_coverage=1 00:19:06.601 --rc genhtml_legend=1 00:19:06.601 --rc geninfo_all_blocks=1 00:19:06.601 --rc geninfo_unexecuted_blocks=1 00:19:06.601 00:19:06.601 ' 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:06.601 13:35:36 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89647 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:06.601 13:35:36 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89647 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89647 ']' 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.601 13:35:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.861 [2024-11-18 13:35:36.655758] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:19:06.861 [2024-11-18 13:35:36.655875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89647 ] 00:19:06.861 [2024-11-18 13:35:36.835123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:07.120 [2024-11-18 13:35:36.943134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.120 [2024-11-18 13:35:36.943184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.689 13:35:37 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.689 13:35:37 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:19:07.689 13:35:37 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:07.689 13:35:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.689 13:35:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.949 13:35:37 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:07.949 13:35:37 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.949 13:35:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.949 13:35:37 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:07.949 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:07.949 ' 00:19:09.329 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:09.329 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:09.588 13:35:39 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:09.588 13:35:39 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.588 13:35:39 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.588 13:35:39 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:09.588 13:35:39 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.588 13:35:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:09.589 13:35:39 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:09.589 ' 00:19:10.528 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:10.788 13:35:40 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:10.788 13:35:40 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.788 13:35:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.788 13:35:40 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:10.788 13:35:40 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.788 13:35:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.788 13:35:40 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:10.788 13:35:40 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:11.358 13:35:41 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:11.358 13:35:41 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:11.358 13:35:41 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:11.358 13:35:41 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.358 13:35:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.358 13:35:41 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:11.358 13:35:41 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.358 13:35:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:11.358 13:35:41 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:11.358 ' 00:19:12.298 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:12.298 13:35:42 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:12.298 13:35:42 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.298 13:35:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.558 13:35:42 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:12.558 13:35:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.558 13:35:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.558 13:35:42 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:12.558 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:12.558 ' 00:19:13.939 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:13.939 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:13.939 13:35:43 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.939 13:35:43 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89647 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89647 ']' 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89647 00:19:13.939 13:35:43 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89647 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.939 killing process with pid 89647 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89647' 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89647 00:19:13.939 13:35:43 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89647 00:19:16.480 13:35:46 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:16.480 13:35:46 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89647 ']' 00:19:16.480 13:35:46 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89647 00:19:16.480 13:35:46 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89647 ']' 00:19:16.480 13:35:46 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89647 00:19:16.480 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89647) - No such process 00:19:16.480 Process with pid 89647 is not found 00:19:16.480 13:35:46 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89647 is not found' 00:19:16.480 13:35:46 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:16.480 13:35:46 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:16.480 13:35:46 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:16.480 13:35:46 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:16.480 00:19:16.480 real 0m9.880s 00:19:16.480 user 0m20.272s 00:19:16.480 sys 
0m1.194s 00:19:16.480 13:35:46 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.480 13:35:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.480 ************************************ 00:19:16.480 END TEST spdkcli_raid 00:19:16.480 ************************************ 00:19:16.480 13:35:46 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:16.480 13:35:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.480 13:35:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.480 13:35:46 -- common/autotest_common.sh@10 -- # set +x 00:19:16.480 ************************************ 00:19:16.480 START TEST blockdev_raid5f 00:19:16.480 ************************************ 00:19:16.480 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:16.480 * Looking for test storage... 00:19:16.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:16.480 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.480 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.480 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.480 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:16.480 13:35:46 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.481 13:35:46 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:16.481 13:35:46 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.481 13:35:46 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.481 13:35:46 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.481 13:35:46 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.481 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.481 --rc genhtml_branch_coverage=1 00:19:16.481 --rc genhtml_function_coverage=1 00:19:16.481 --rc genhtml_legend=1 00:19:16.481 --rc geninfo_all_blocks=1 00:19:16.481 --rc geninfo_unexecuted_blocks=1 00:19:16.481 00:19:16.481 ' 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.481 --rc genhtml_branch_coverage=1 00:19:16.481 --rc genhtml_function_coverage=1 00:19:16.481 --rc genhtml_legend=1 00:19:16.481 --rc geninfo_all_blocks=1 00:19:16.481 --rc geninfo_unexecuted_blocks=1 00:19:16.481 00:19:16.481 ' 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.481 --rc genhtml_branch_coverage=1 00:19:16.481 --rc genhtml_function_coverage=1 00:19:16.481 --rc genhtml_legend=1 00:19:16.481 --rc geninfo_all_blocks=1 00:19:16.481 --rc geninfo_unexecuted_blocks=1 00:19:16.481 00:19:16.481 ' 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.481 --rc genhtml_branch_coverage=1 00:19:16.481 --rc genhtml_function_coverage=1 00:19:16.481 --rc genhtml_legend=1 00:19:16.481 --rc geninfo_all_blocks=1 00:19:16.481 --rc geninfo_unexecuted_blocks=1 00:19:16.481 00:19:16.481 ' 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89927 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:16.481 13:35:46 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89927 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89927 ']' 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.481 13:35:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:16.753 [2024-11-18 13:35:46.590523] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:16.753 [2024-11-18 13:35:46.590662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89927 ] 00:19:16.753 [2024-11-18 13:35:46.766090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.042 [2024-11-18 13:35:46.880527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:17.999 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:17.999 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:17.999 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:17.999 13:35:47 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:17.999 Malloc0 00:19:17.999 Malloc1 00:19:17.999 Malloc2 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.999 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.999 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:17.999 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.999 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.999 13:35:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "1088848c-0cdc-42e7-9670-6a09271e97f5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1088848c-0cdc-42e7-9670-6a09271e97f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "1088848c-0cdc-42e7-9670-6a09271e97f5",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "703a3ccf-ad0a-48dc-9e8b-8b1cd0b76de4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3a00798a-c1d5-47e9-9458-76bba1dc1038",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7a848558-449d-4ba5-999e-db9796df00bd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:18.000 13:35:47 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89927 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89927 ']' 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89927 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:18.000 13:35:47 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.000 13:35:48 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89927 00:19:18.000 13:35:48 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.000 13:35:48 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.000 killing process with pid 89927 00:19:18.000 13:35:48 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89927' 00:19:18.000 13:35:48 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89927 00:19:18.000 13:35:48 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89927 00:19:20.539 13:35:50 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:20.539 13:35:50 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:20.539 13:35:50 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:20.539 13:35:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.539 13:35:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:20.539 ************************************ 00:19:20.539 START TEST bdev_hello_world 00:19:20.539 ************************************ 00:19:20.539 13:35:50 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:20.799 [2024-11-18 13:35:50.608341] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:20.799 [2024-11-18 13:35:50.608471] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89993 ] 00:19:20.799 [2024-11-18 13:35:50.787463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.058 [2024-11-18 13:35:50.891239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.628 [2024-11-18 13:35:51.400395] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:21.628 [2024-11-18 13:35:51.400446] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:21.628 [2024-11-18 13:35:51.400463] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:21.628 [2024-11-18 13:35:51.400909] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:21.628 [2024-11-18 13:35:51.401038] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:21.628 [2024-11-18 13:35:51.401060] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:21.628 [2024-11-18 13:35:51.401103] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:21.628 00:19:21.628 [2024-11-18 13:35:51.401119] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:23.005 00:19:23.005 real 0m2.179s 00:19:23.005 user 0m1.811s 00:19:23.005 sys 0m0.248s 00:19:23.005 13:35:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.005 13:35:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:23.005 ************************************ 00:19:23.005 END TEST bdev_hello_world 00:19:23.005 ************************************ 00:19:23.005 13:35:52 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:23.005 13:35:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:23.005 13:35:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.005 13:35:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.005 ************************************ 00:19:23.005 START TEST bdev_bounds 00:19:23.005 ************************************ 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90035 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:23.005 Process bdevio pid: 90035 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90035' 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90035 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90035 ']' 00:19:23.005 13:35:52 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.005 13:35:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:23.005 [2024-11-18 13:35:52.852173] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:23.005 [2024-11-18 13:35:52.852273] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90035 ] 00:19:23.005 [2024-11-18 13:35:53.023027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:23.265 [2024-11-18 13:35:53.130640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.265 [2024-11-18 13:35:53.130797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.265 [2024-11-18 13:35:53.130828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.834 13:35:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.834 13:35:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:23.834 13:35:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:23.834 I/O targets: 00:19:23.834 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:23.834 00:19:23.834 
00:19:23.834 CUnit - A unit testing framework for C - Version 2.1-3 00:19:23.834 http://cunit.sourceforge.net/ 00:19:23.834 00:19:23.834 00:19:23.834 Suite: bdevio tests on: raid5f 00:19:23.834 Test: blockdev write read block ...passed 00:19:23.834 Test: blockdev write zeroes read block ...passed 00:19:23.834 Test: blockdev write zeroes read no split ...passed 00:19:24.094 Test: blockdev write zeroes read split ...passed 00:19:24.094 Test: blockdev write zeroes read split partial ...passed 00:19:24.094 Test: blockdev reset ...passed 00:19:24.094 Test: blockdev write read 8 blocks ...passed 00:19:24.094 Test: blockdev write read size > 128k ...passed 00:19:24.094 Test: blockdev write read invalid size ...passed 00:19:24.094 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:24.094 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:24.094 Test: blockdev write read max offset ...passed 00:19:24.094 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:24.094 Test: blockdev writev readv 8 blocks ...passed 00:19:24.094 Test: blockdev writev readv 30 x 1block ...passed 00:19:24.094 Test: blockdev writev readv block ...passed 00:19:24.094 Test: blockdev writev readv size > 128k ...passed 00:19:24.094 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:24.094 Test: blockdev comparev and writev ...passed 00:19:24.094 Test: blockdev nvme passthru rw ...passed 00:19:24.094 Test: blockdev nvme passthru vendor specific ...passed 00:19:24.094 Test: blockdev nvme admin passthru ...passed 00:19:24.094 Test: blockdev copy ...passed 00:19:24.094 00:19:24.094 Run Summary: Type Total Ran Passed Failed Inactive 00:19:24.094 suites 1 1 n/a 0 0 00:19:24.094 tests 23 23 23 0 0 00:19:24.094 asserts 130 130 130 0 n/a 00:19:24.094 00:19:24.094 Elapsed time = 0.601 seconds 00:19:24.094 0 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90035 00:19:24.094 
13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90035 ']' 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90035 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90035 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.094 killing process with pid 90035 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90035' 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90035 00:19:24.094 13:35:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90035 00:19:25.474 13:35:55 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:25.474 00:19:25.474 real 0m2.681s 00:19:25.474 user 0m6.751s 00:19:25.474 sys 0m0.368s 00:19:25.474 13:35:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.474 13:35:55 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:25.474 ************************************ 00:19:25.474 END TEST bdev_bounds 00:19:25.474 ************************************ 00:19:25.474 13:35:55 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:25.474 13:35:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:25.474 13:35:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.474 
13:35:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:25.474 ************************************ 00:19:25.474 START TEST bdev_nbd 00:19:25.474 ************************************ 00:19:25.474 13:35:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:25.474 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:25.734 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90091 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90091 /var/tmp/spdk-nbd.sock 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90091 ']' 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.735 13:35:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:25.735 [2024-11-18 13:35:55.622776] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:19:25.735 [2024-11-18 13:35:55.622881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.995 [2024-11-18 13:35:55.798857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.995 [2024-11-18 13:35:55.904526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:26.564 13:35:56 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.824 1+0 records in 00:19:26.824 1+0 records out 00:19:26.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00400318 s, 1.0 MB/s 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:26.824 { 00:19:26.824 "nbd_device": "/dev/nbd0", 00:19:26.824 "bdev_name": "raid5f" 00:19:26.824 } 00:19:26.824 ]' 00:19:26.824 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:27.084 { 00:19:27.084 "nbd_device": "/dev/nbd0", 00:19:27.084 "bdev_name": "raid5f" 00:19:27.084 } 00:19:27.084 ]' 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:27.084 13:35:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:27.084 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:27.084 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:27.084 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:27.084 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:27.084 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:27.084 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:27.343 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:27.343 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:27.343 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:27.343 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.343 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:27.343 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:27.343 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:27.343 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:27.603 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:27.603 /dev/nbd0 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:27.863 13:35:57 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.863 1+0 records in 00:19:27.863 1+0 records out 00:19:27.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369613 s, 11.1 MB/s 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.863 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:27.864 13:35:57 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:27.864 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.864 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:27.864 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:27.864 { 00:19:27.864 "nbd_device": "/dev/nbd0", 00:19:27.864 "bdev_name": "raid5f" 00:19:27.864 } 00:19:27.864 ]' 00:19:27.864 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:27.864 { 00:19:27.864 "nbd_device": "/dev/nbd0", 00:19:27.864 "bdev_name": "raid5f" 00:19:27.864 } 00:19:27.864 ]' 00:19:27.864 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:28.123 256+0 records in 00:19:28.123 256+0 records out 00:19:28.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130875 s, 80.1 MB/s 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.123 13:35:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:28.123 256+0 records in 00:19:28.123 256+0 records out 00:19:28.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295382 s, 35.5 MB/s 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.123 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.383 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:28.643 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:28.903 malloc_lvol_verify 00:19:28.903 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:28.903 b888b209-b141-47cd-88a0-ce0e8f94b906 00:19:28.903 13:35:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:29.163 26629ffd-beee-41a6-b2a6-832f435fd643 00:19:29.163 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:29.422 /dev/nbd0 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:29.422 mke2fs 1.47.0 (5-Feb-2023) 00:19:29.422 Discarding device blocks: 0/4096 done 00:19:29.422 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:29.422 00:19:29.422 Allocating group tables: 0/1 done 00:19:29.422 Writing inode tables: 0/1 done 00:19:29.422 Creating journal (1024 blocks): done 00:19:29.422 Writing superblocks and filesystem accounting information: 0/1 done 00:19:29.422 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.422 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:29.681 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.681 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.681 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.681 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.681 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.681 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.681 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90091 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90091 ']' 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90091 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90091 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.682 killing process with pid 90091 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90091' 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90091 00:19:29.682 13:35:59 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90091 00:19:31.063 13:36:00 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:31.063 00:19:31.063 real 0m5.448s 00:19:31.063 user 0m7.363s 00:19:31.063 sys 0m1.310s 00:19:31.063 13:36:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.063 13:36:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:31.063 ************************************ 00:19:31.063 END TEST bdev_nbd 00:19:31.063 ************************************ 00:19:31.063 13:36:01 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:31.063 13:36:01 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:31.063 13:36:01 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:31.063 13:36:01 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:31.063 13:36:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.063 13:36:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.063 13:36:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:31.063 ************************************ 00:19:31.063 START TEST bdev_fio 00:19:31.063 ************************************ 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:31.063 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:31.063 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
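[Editor's note] `fio_config_gen` above writes the job file that the `[job_raid5f]` section is later appended to (blockdev.sh@340-342). A hedged sketch of the shape of that file; the verify option and section contents here are illustrative, not copied from SPDK's generated bdev.fio:

```shell
# Illustrative only: build a minimal fio job file of the kind the suite
# generates, then append one [job_<bdev>] section per bdev under test.
config=$(mktemp)
cat > "$config" <<'EOF'
[global]
ioengine=spdk_bdev
iodepth=8
bs=4k
runtime=10
verify=crc32c
EOF
printf '[job_raid5f]\nfilename=raid5f\n' >> "$config"
grep -c '^\[' "$config"    # counts the INI section headers
```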
00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:31.324 ************************************ 00:19:31.324 START TEST bdev_fio_rw_verify 00:19:31.324 ************************************ 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break
00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:31.324 13:36:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:19:31.584 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:19:31.584 fio-3.35
00:19:31.584 Starting 1 thread
00:19:43.797
00:19:43.797 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90290: Mon Nov 18 13:36:12 2024
00:19:43.797   read: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(496MiB/10001msec)
00:19:43.797     slat (usec): min=16, max=227, avg=18.45, stdev= 1.74
00:19:43.797     clat (usec): min=10, max=433, avg=126.18, stdev=43.60
00:19:43.797      lat (usec): min=29, max=452, avg=144.63, stdev=43.76
00:19:43.797     clat percentiles (usec):
00:19:43.797      | 50.000th=[  131], 99.000th=[  202], 99.900th=[  227], 99.990th=[  273],
00:19:43.797      | 99.999th=[  416]
00:19:43.797   write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(514MiB/9878msec); 0 zone resets
00:19:43.797     slat (usec): min=7, max=1135, avg=15.81, stdev= 4.79
00:19:43.797     clat (usec): min=57, max=1699, avg=291.13, stdev=40.30
00:19:43.797      lat (usec): min=72, max=1715, avg=306.94, stdev=41.25
00:19:43.797     clat percentiles (usec):
00:19:43.797      | 50.000th=[  297], 99.000th=[  363], 99.900th=[  586], 99.990th=[ 1156],
00:19:43.797      | 99.999th=[ 1680]
00:19:43.797    bw (  KiB/s): min=50698, max=54824, per=98.96%, avg=52678.42, stdev=1319.47, samples=19
00:19:43.797    iops        : min=12674, max=13706, avg=13169.58, stdev=329.91, samples=19
00:19:43.797   lat (usec)   : 20=0.01%, 50=0.01%, 100=17.18%, 250=39.09%, 500=43.66%
00:19:43.797   lat (usec)   : 750=0.04%, 1000=0.02%
00:19:43.797   lat (msec)   : 2=0.01%
00:19:43.797   cpu          : usr=98.83%, sys=0.41%, ctx=86, majf=0, minf=10364
00:19:43.797   IO depths    : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:43.797      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:43.797      complete  : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:43.797      issued rwts: total=126916,131461,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:43.797      latency   : target=0, window=0, percentile=100.00%, depth=8
00:19:43.797
00:19:43.797 Run status group 0 (all jobs):
00:19:43.797    READ: bw=49.6MiB/s (52.0MB/s), 49.6MiB/s-49.6MiB/s (52.0MB/s-52.0MB/s), io=496MiB (520MB), run=10001-10001msec
00:19:43.797   WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=514MiB (538MB), run=9878-9878msec
00:19:43.797 -----------------------------------------------------
00:19:43.797 Suppressions used:
00:19:43.797   count      bytes template
00:19:43.797       1          7 /usr/src/fio/parse.c
00:19:43.797     602      57792 /usr/src/fio/iolog.c
00:19:43.797       1          8 libtcmalloc_minimal.so
00:19:43.797       1        904 libcrypto.so
00:19:43.797 -----------------------------------------------------
00:19:43.797
00:19:44.057
00:19:44.057 real	0m12.656s
00:19:44.057 user	0m13.029s
00:19:44.057 sys	0m0.648s
00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:19:44.057 ************************************
00:19:44.057 END TEST bdev_fio_rw_verify
00:19:44.057 ************************************
00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio --
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "1088848c-0cdc-42e7-9670-6a09271e97f5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1088848c-0cdc-42e7-9670-6a09271e97f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "1088848c-0cdc-42e7-9670-6a09271e97f5",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "703a3ccf-ad0a-48dc-9e8b-8b1cd0b76de4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3a00798a-c1d5-47e9-9458-76bba1dc1038",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7a848558-449d-4ba5-999e-db9796df00bd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:44.057 13:36:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:44.057 13:36:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:44.057 /home/vagrant/spdk_repo/spdk 00:19:44.057 13:36:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:44.057 13:36:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:44.057 13:36:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:44.057 00:19:44.057 real 0m12.963s 00:19:44.057 user 0m13.155s 00:19:44.057 sys 0m0.797s 00:19:44.057 13:36:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.057 13:36:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:44.057 ************************************ 00:19:44.057 END TEST bdev_fio 00:19:44.057 ************************************ 00:19:44.057 13:36:14 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:44.057 13:36:14 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:44.057 13:36:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:44.057 13:36:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.057 13:36:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:44.057 ************************************ 00:19:44.057 START TEST bdev_verify 00:19:44.057 ************************************ 00:19:44.057 13:36:14 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:44.317 [2024-11-18 13:36:14.176576] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:19:44.317 [2024-11-18 13:36:14.176687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90454 ] 00:19:44.317 [2024-11-18 13:36:14.353036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:44.576 [2024-11-18 13:36:14.460793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.576 [2024-11-18 13:36:14.460798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.144 Running I/O for 5 seconds... 00:19:47.030 11054.00 IOPS, 43.18 MiB/s [2024-11-18T13:36:18.059Z] 10959.00 IOPS, 42.81 MiB/s [2024-11-18T13:36:18.998Z] 10994.67 IOPS, 42.95 MiB/s [2024-11-18T13:36:20.377Z] 10986.25 IOPS, 42.92 MiB/s [2024-11-18T13:36:20.377Z] 10963.20 IOPS, 42.83 MiB/s 00:19:50.323 Latency(us) 00:19:50.323 [2024-11-18T13:36:20.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.323 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:50.323 Verification LBA range: start 0x0 length 0x2000 00:19:50.323 raid5f : 5.02 4368.92 17.07 0.00 0.00 43914.08 270.09 30678.86 00:19:50.324 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.324 Verification LBA range: start 0x2000 length 0x2000 00:19:50.324 raid5f : 5.01 6582.83 25.71 0.00 0.00 29269.57 225.37 21406.52 00:19:50.324 [2024-11-18T13:36:20.378Z] =================================================================================================================== 00:19:50.324 [2024-11-18T13:36:20.378Z] Total : 10951.75 42.78 0.00 0.00 35116.02 225.37 30678.86 00:19:51.263 00:19:51.263 real 0m7.229s 00:19:51.263 user 0m13.372s 00:19:51.263 sys 0m0.275s 00:19:51.263 13:36:21 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.263 13:36:21 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:51.263 ************************************ 00:19:51.263 END TEST bdev_verify 00:19:51.523 ************************************ 00:19:51.523 13:36:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:51.523 13:36:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:51.523 13:36:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.523 13:36:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:51.523 ************************************ 00:19:51.523 START TEST bdev_verify_big_io 00:19:51.523 ************************************ 00:19:51.523 13:36:21 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:51.523 [2024-11-18 13:36:21.479733] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:51.523 [2024-11-18 13:36:21.479848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90551 ] 00:19:51.782 [2024-11-18 13:36:21.655569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:51.783 [2024-11-18 13:36:21.755716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.783 [2024-11-18 13:36:21.755741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.351 Running I/O for 5 seconds... 
00:19:54.225 633.00 IOPS, 39.56 MiB/s [2024-11-18T13:36:25.659Z] 761.00 IOPS, 47.56 MiB/s [2024-11-18T13:36:26.597Z] 781.33 IOPS, 48.83 MiB/s [2024-11-18T13:36:27.535Z] 777.00 IOPS, 48.56 MiB/s [2024-11-18T13:36:27.795Z] 787.00 IOPS, 49.19 MiB/s 00:19:57.741 Latency(us) 00:19:57.741 [2024-11-18T13:36:27.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.741 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:57.741 Verification LBA range: start 0x0 length 0x200 00:19:57.741 raid5f : 5.22 340.73 21.30 0.00 0.00 9307829.59 380.98 390125.22 00:19:57.741 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:57.741 Verification LBA range: start 0x200 length 0x200 00:19:57.741 raid5f : 5.29 455.75 28.48 0.00 0.00 7051472.06 160.08 307704.40 00:19:57.741 [2024-11-18T13:36:27.795Z] =================================================================================================================== 00:19:57.741 [2024-11-18T13:36:27.795Z] Total : 796.49 49.78 0.00 0.00 8009171.68 160.08 390125.22 00:19:59.121 00:19:59.121 real 0m7.484s 00:19:59.121 user 0m13.905s 00:19:59.121 sys 0m0.266s 00:19:59.121 13:36:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.121 13:36:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:59.121 ************************************ 00:19:59.121 END TEST bdev_verify_big_io 00:19:59.121 ************************************ 00:19:59.121 13:36:28 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.121 13:36:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:59.121 13:36:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.121 13:36:28 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:59.121 ************************************ 00:19:59.121 START TEST bdev_write_zeroes 00:19:59.121 ************************************ 00:19:59.121 13:36:28 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:59.121 [2024-11-18 13:36:29.036371] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:59.121 [2024-11-18 13:36:29.036471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90649 ] 00:19:59.380 [2024-11-18 13:36:29.208456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.380 [2024-11-18 13:36:29.311382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.949 Running I/O for 1 seconds... 
00:20:00.886 30471.00 IOPS, 119.03 MiB/s 00:20:00.886 Latency(us) 00:20:00.886 [2024-11-18T13:36:30.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.886 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:00.887 raid5f : 1.01 30441.21 118.91 0.00 0.00 4193.77 1209.12 5866.76 00:20:00.887 [2024-11-18T13:36:30.941Z] =================================================================================================================== 00:20:00.887 [2024-11-18T13:36:30.941Z] Total : 30441.21 118.91 0.00 0.00 4193.77 1209.12 5866.76 00:20:02.269 00:20:02.269 real 0m3.174s 00:20:02.269 user 0m2.809s 00:20:02.269 sys 0m0.239s 00:20:02.269 13:36:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.269 13:36:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:02.269 ************************************ 00:20:02.269 END TEST bdev_write_zeroes 00:20:02.269 ************************************ 00:20:02.269 13:36:32 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:02.269 13:36:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:02.269 13:36:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.269 13:36:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:02.269 ************************************ 00:20:02.269 START TEST bdev_json_nonenclosed 00:20:02.269 ************************************ 00:20:02.269 13:36:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:02.269 [2024-11-18 
13:36:32.299116] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:20:02.269 [2024-11-18 13:36:32.299271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90702 ] 00:20:02.529 [2024-11-18 13:36:32.478311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.790 [2024-11-18 13:36:32.583668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.790 [2024-11-18 13:36:32.583758] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:02.790 [2024-11-18 13:36:32.583785] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:02.790 [2024-11-18 13:36:32.583796] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:02.790 00:20:02.790 real 0m0.619s 00:20:02.790 user 0m0.369s 00:20:02.790 sys 0m0.145s 00:20:02.790 13:36:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.790 13:36:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:02.790 ************************************ 00:20:02.790 END TEST bdev_json_nonenclosed 00:20:02.790 ************************************ 00:20:03.050 13:36:32 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:03.050 13:36:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:20:03.050 13:36:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.050 13:36:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 
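[Editor's note] `bdev_json_nonenclosed` above feeds bdevperf a config whose top level is not wrapped in `{}` and expects the "not enclosed in {}" error from json_config.c. A rough stand-alone illustration of the distinction; the helper and the check are illustrative, not SPDK's actual validation logic:

```shell
# A valid SPDK app config is a single JSON object enclosing "subsystems";
# nonenclosed.json deliberately omits the outer braces.
good='{"subsystems": []}'
bad='"subsystems": []'
is_enclosed() {
    printf '%s' "$1" | python3 -c \
        'import json, sys; sys.exit(0 if isinstance(json.load(sys.stdin), dict) else 1)' \
        2>/dev/null
}
is_enclosed "$good" && echo "good config: enclosed in {}"
is_enclosed "$bad"  || echo "bad config: not enclosed in {}"
```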
************************************ 00:20:03.050 START TEST bdev_json_nonarray 00:20:03.050 ************************************ 00:20:03.050 13:36:32 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:03.050 [2024-11-18 13:36:32.981971] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:20:03.050 [2024-11-18 13:36:32.982077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90728 ] 00:20:03.311 [2024-11-18 13:36:33.155360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.311 [2024-11-18 13:36:33.260988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.311 [2024-11-18 13:36:33.261086] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:03.311 [2024-11-18 13:36:33.261103] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:03.311 [2024-11-18 13:36:33.261120] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:03.572
00:20:03.572 real 0m0.601s
00:20:03.572 user 0m0.370s
00:20:03.572 sys 0m0.126s
00:20:03.572 13:36:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:03.572 13:36:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:20:03.572 ************************************
00:20:03.572 END TEST bdev_json_nonarray
00:20:03.572 ************************************
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:20:03.572 13:36:33 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:20:03.572
00:20:03.572 real 0m47.322s
00:20:03.572 user 1m4.195s
00:20:03.572 sys 0m4.922s
00:20:03.572 13:36:33 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:03.572 13:36:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:20:03.572 ************************************
00:20:03.572 END TEST blockdev_raid5f
00:20:03.572 ************************************
00:20:03.832 13:36:33 -- spdk/autotest.sh@194 -- # uname -s
00:20:03.832 13:36:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:20:03.832 13:36:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:03.832 13:36:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:20:03.832 13:36:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@260 -- # timing_exit lib
00:20:03.832 13:36:33 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:03.832 13:36:33 -- common/autotest_common.sh@10 -- # set +x
00:20:03.832 13:36:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:20:03.832 13:36:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:20:03.832 13:36:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:20:03.832 13:36:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:20:03.832 13:36:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:20:03.832 13:36:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:20:03.832 13:36:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:20:03.832 13:36:33 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:03.832 13:36:33 -- common/autotest_common.sh@10 -- # set +x
00:20:03.832 13:36:33 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:20:03.832 13:36:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:20:03.832 13:36:33 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:20:03.832 13:36:33 -- common/autotest_common.sh@10 -- # set +x
00:20:06.374 INFO: APP EXITING
00:20:06.374 INFO: killing all VMs
00:20:06.374 INFO: killing vhost app
00:20:06.374 INFO: EXIT DONE
00:20:06.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:06.635 Waiting for block devices as requested
00:20:06.635 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:20:06.895 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:20:07.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:07.835 Cleaning
00:20:07.835 Removing: /var/run/dpdk/spdk0/config
00:20:07.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:20:07.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:20:07.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:20:07.835 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:20:07.835 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:20:07.835 Removing: /var/run/dpdk/spdk0/hugepage_info
00:20:07.835 Removing: /dev/shm/spdk_tgt_trace.pid56907
00:20:07.835 Removing: /var/run/dpdk/spdk0
00:20:07.835 Removing: /var/run/dpdk/spdk_pid56672
00:20:07.835 Removing: /var/run/dpdk/spdk_pid56907
00:20:07.835 Removing: /var/run/dpdk/spdk_pid57141
00:20:07.835 Removing: /var/run/dpdk/spdk_pid57251
00:20:07.835 Removing: /var/run/dpdk/spdk_pid57296
00:20:07.835 Removing: /var/run/dpdk/spdk_pid57435
00:20:07.835 Removing: /var/run/dpdk/spdk_pid57453
00:20:07.835 Removing: /var/run/dpdk/spdk_pid57663
00:20:07.835 Removing: /var/run/dpdk/spdk_pid57775
00:20:07.835 Removing: /var/run/dpdk/spdk_pid57882
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58004
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58112
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58151
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58188
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58264
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58370
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58817
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58887
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58961
00:20:07.835 Removing: /var/run/dpdk/spdk_pid58977
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59129
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59145
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59302
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59318
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59387
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59411
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59475
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59493
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59699
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59741
00:20:07.835 Removing: /var/run/dpdk/spdk_pid59830
00:20:07.835 Removing: /var/run/dpdk/spdk_pid61223
00:20:07.835 Removing: /var/run/dpdk/spdk_pid61440
00:20:07.835 Removing: /var/run/dpdk/spdk_pid61580
00:20:07.835 Removing: /var/run/dpdk/spdk_pid62229
00:20:07.835 Removing: /var/run/dpdk/spdk_pid62446
00:20:07.835 Removing: /var/run/dpdk/spdk_pid62586
00:20:07.835 Removing: /var/run/dpdk/spdk_pid63229
00:20:07.835 Removing: /var/run/dpdk/spdk_pid63564
00:20:07.835 Removing: /var/run/dpdk/spdk_pid63706
00:20:08.096 Removing: /var/run/dpdk/spdk_pid65091
00:20:08.096 Removing: /var/run/dpdk/spdk_pid65343
00:20:08.096 Removing: /var/run/dpdk/spdk_pid65490
00:20:08.096 Removing: /var/run/dpdk/spdk_pid66876
00:20:08.096 Removing: /var/run/dpdk/spdk_pid67128
00:20:08.096 Removing: /var/run/dpdk/spdk_pid67275
00:20:08.096 Removing: /var/run/dpdk/spdk_pid68656
00:20:08.096 Removing: /var/run/dpdk/spdk_pid69100
00:20:08.096 Removing: /var/run/dpdk/spdk_pid69246
00:20:08.096 Removing: /var/run/dpdk/spdk_pid70728
00:20:08.096 Removing: /var/run/dpdk/spdk_pid70997
00:20:08.096 Removing: /var/run/dpdk/spdk_pid71138
00:20:08.096 Removing: /var/run/dpdk/spdk_pid72629
00:20:08.096 Removing: /var/run/dpdk/spdk_pid72893
00:20:08.096 Removing: /var/run/dpdk/spdk_pid73039
00:20:08.096 Removing: /var/run/dpdk/spdk_pid74531
00:20:08.096 Removing: /var/run/dpdk/spdk_pid75018
00:20:08.096 Removing: /var/run/dpdk/spdk_pid75164
00:20:08.096 Removing: /var/run/dpdk/spdk_pid75313
00:20:08.096 Removing: /var/run/dpdk/spdk_pid75731
00:20:08.096 Removing: /var/run/dpdk/spdk_pid76461
00:20:08.096 Removing: /var/run/dpdk/spdk_pid76837
00:20:08.096 Removing: /var/run/dpdk/spdk_pid77522
00:20:08.096 Removing: /var/run/dpdk/spdk_pid77957
00:20:08.096 Removing: /var/run/dpdk/spdk_pid78705
00:20:08.096 Removing: /var/run/dpdk/spdk_pid79114
00:20:08.096 Removing: /var/run/dpdk/spdk_pid81078
00:20:08.096 Removing: /var/run/dpdk/spdk_pid81522
00:20:08.096 Removing: /var/run/dpdk/spdk_pid81961
00:20:08.096 Removing: /var/run/dpdk/spdk_pid84058
00:20:08.096 Removing: /var/run/dpdk/spdk_pid84538
00:20:08.096 Removing: /var/run/dpdk/spdk_pid85060
00:20:08.096 Removing: /var/run/dpdk/spdk_pid86120
00:20:08.096 Removing: /var/run/dpdk/spdk_pid86443
00:20:08.096 Removing: /var/run/dpdk/spdk_pid87384
00:20:08.096 Removing: /var/run/dpdk/spdk_pid87714
00:20:08.096 Removing: /var/run/dpdk/spdk_pid88648
00:20:08.096 Removing: /var/run/dpdk/spdk_pid88976
00:20:08.096 Removing: /var/run/dpdk/spdk_pid89647
00:20:08.096 Removing: /var/run/dpdk/spdk_pid89927
00:20:08.096 Removing: /var/run/dpdk/spdk_pid89993
00:20:08.096 Removing: /var/run/dpdk/spdk_pid90035
00:20:08.096 Removing: /var/run/dpdk/spdk_pid90275
00:20:08.096 Removing: /var/run/dpdk/spdk_pid90454
00:20:08.096 Removing: /var/run/dpdk/spdk_pid90551
00:20:08.096 Removing: /var/run/dpdk/spdk_pid90649
00:20:08.096 Removing: /var/run/dpdk/spdk_pid90702
00:20:08.096 Removing: /var/run/dpdk/spdk_pid90728
00:20:08.096 Clean
00:20:08.357 13:36:38 -- common/autotest_common.sh@1453 -- # return 0
00:20:08.357 13:36:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:20:08.357 13:36:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:08.357 13:36:38 -- common/autotest_common.sh@10 -- # set +x
00:20:08.357 13:36:38 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:20:08.357 13:36:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:08.357 13:36:38 -- common/autotest_common.sh@10 -- # set +x
00:20:08.357 13:36:38 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:08.357 13:36:38 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:20:08.357 13:36:38 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:20:08.357 13:36:38 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:20:08.357 13:36:38 -- spdk/autotest.sh@398 -- # hostname
00:20:08.357 13:36:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:20:08.617 geninfo: WARNING: invalid characters removed from testname!
00:20:35.249 13:37:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:36.189 13:37:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:38.098 13:37:08 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:40.640 13:37:10 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:43.183 13:37:12 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:45.093 13:37:14 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:47.002 13:37:16 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:20:47.002 13:37:16 -- spdk/autorun.sh@1 -- $ timing_finish
00:20:47.002 13:37:16 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:20:47.002 13:37:16 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:47.002 13:37:16 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:47.002 13:37:16 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:47.002 + [[ -n 5418 ]]
00:20:47.002 + sudo kill 5418
00:20:47.013 [Pipeline] }
00:20:47.030 [Pipeline] // timeout
00:20:47.035 [Pipeline] }
00:20:47.050 [Pipeline] // stage
00:20:47.056 [Pipeline] }
00:20:47.070 [Pipeline] // catchError
00:20:47.081 [Pipeline] stage
00:20:47.084 [Pipeline] { (Stop VM)
00:20:47.096 [Pipeline] sh
00:20:47.380 + vagrant halt
00:20:49.930 ==> default: Halting domain...
00:20:58.077 [Pipeline] sh
00:20:58.413 + vagrant destroy -f
00:21:00.967 ==> default: Removing domain...
00:21:00.981 [Pipeline] sh
00:21:01.267 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output
00:21:01.277 [Pipeline] }
00:21:01.292 [Pipeline] // stage
00:21:01.297 [Pipeline] }
00:21:01.311 [Pipeline] // dir
00:21:01.316 [Pipeline] }
00:21:01.330 [Pipeline] // wrap
00:21:01.336 [Pipeline] }
00:21:01.349 [Pipeline] // catchError
00:21:01.358 [Pipeline] stage
00:21:01.360 [Pipeline] { (Epilogue)
00:21:01.373 [Pipeline] sh
00:21:01.658 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:21:05.872 [Pipeline] catchError
00:21:05.874 [Pipeline] {
00:21:05.887 [Pipeline] sh
00:21:06.174 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:21:06.174 Artifacts sizes are good
00:21:06.184 [Pipeline] }
00:21:06.198 [Pipeline] // catchError
00:21:06.210 [Pipeline] archiveArtifacts
00:21:06.218 Archiving artifacts
00:21:06.314 [Pipeline] cleanWs
00:21:06.328 [WS-CLEANUP] Deleting project workspace...
00:21:06.328 [WS-CLEANUP] Deferred wipeout is used...
00:21:06.336 [WS-CLEANUP] done
00:21:06.338 [Pipeline] }
00:21:06.353 [Pipeline] // stage
00:21:06.358 [Pipeline] }
00:21:06.372 [Pipeline] // node
00:21:06.380 [Pipeline] End of Pipeline
00:21:06.419 Finished: SUCCESS